Dec 12 19:40:40.942037 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 19:40:40.943107 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 19:40:40.943129 kernel: BIOS-provided physical RAM map:
Dec 12 19:40:40.943140 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 12 19:40:40.943158 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 12 19:40:40.943169 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 19:40:40.943181 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 12 19:40:40.943191 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 12 19:40:40.943202 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 19:40:40.943212 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 19:40:40.943223 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 19:40:40.943233 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 19:40:40.943244 kernel: NX (Execute Disable) protection: active
Dec 12 19:40:40.943260 kernel: APIC: Static calls initialized
Dec 12 19:40:40.943272 kernel: SMBIOS 2.8 present.
Dec 12 19:40:40.943284 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 12 19:40:40.943296 kernel: DMI: Memory slots populated: 1/1
Dec 12 19:40:40.943307 kernel: Hypervisor detected: KVM
Dec 12 19:40:40.943318 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 12 19:40:40.943334 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 19:40:40.943345 kernel: kvm-clock: using sched offset of 5796264705 cycles
Dec 12 19:40:40.943358 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 19:40:40.943369 kernel: tsc: Detected 2499.998 MHz processor
Dec 12 19:40:40.943381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 19:40:40.943393 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 19:40:40.943404 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 12 19:40:40.943416 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 19:40:40.943427 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 19:40:40.943443 kernel: Using GB pages for direct mapping
Dec 12 19:40:40.943454 kernel: ACPI: Early table checksum verification disabled
Dec 12 19:40:40.943466 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 12 19:40:40.943477 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943489 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943500 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943512 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 12 19:40:40.943523 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943535 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943568 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943581 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 19:40:40.943593 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 12 19:40:40.943611 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 12 19:40:40.943624 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 12 19:40:40.943636 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 12 19:40:40.943652 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 12 19:40:40.943664 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 12 19:40:40.943676 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 12 19:40:40.943688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 12 19:40:40.943700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 12 19:40:40.943712 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 12 19:40:40.943724 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Dec 12 19:40:40.943736 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Dec 12 19:40:40.943753 kernel: Zone ranges:
Dec 12 19:40:40.943765 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 19:40:40.943777 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 12 19:40:40.943789 kernel: Normal empty
Dec 12 19:40:40.943801 kernel: Device empty
Dec 12 19:40:40.943813 kernel: Movable zone start for each node
Dec 12 19:40:40.943825 kernel: Early memory node ranges
Dec 12 19:40:40.943836 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 19:40:40.943848 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 12 19:40:40.943860 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 12 19:40:40.943877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 19:40:40.943889 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 19:40:40.943901 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 12 19:40:40.943913 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 19:40:40.943925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 19:40:40.943937 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 19:40:40.943949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 19:40:40.943960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 19:40:40.943972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 19:40:40.943989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 19:40:40.944001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 19:40:40.944023 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 19:40:40.944035 kernel: TSC deadline timer available
Dec 12 19:40:40.944047 kernel: CPU topo: Max. logical packages: 16
Dec 12 19:40:40.944059 kernel: CPU topo: Max. logical dies: 16
Dec 12 19:40:40.944071 kernel: CPU topo: Max. dies per package: 1
Dec 12 19:40:40.944083 kernel: CPU topo: Max. threads per core: 1
Dec 12 19:40:40.945138 kernel: CPU topo: Num. cores per package: 1
Dec 12 19:40:40.945159 kernel: CPU topo: Num. threads per package: 1
Dec 12 19:40:40.945171 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Dec 12 19:40:40.945183 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 19:40:40.945195 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 19:40:40.945207 kernel: Booting paravirtualized kernel on KVM
Dec 12 19:40:40.945219 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 19:40:40.945232 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 12 19:40:40.945244 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Dec 12 19:40:40.945256 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Dec 12 19:40:40.945272 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 12 19:40:40.945284 kernel: kvm-guest: PV spinlocks enabled
Dec 12 19:40:40.945297 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 19:40:40.945310 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 19:40:40.945323 kernel: random: crng init done
Dec 12 19:40:40.945335 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 19:40:40.945347 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 12 19:40:40.945359 kernel: Fallback order for Node 0: 0
Dec 12 19:40:40.945375 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Dec 12 19:40:40.945387 kernel: Policy zone: DMA32
Dec 12 19:40:40.945399 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 19:40:40.945411 kernel: software IO TLB: area num 16.
Dec 12 19:40:40.945423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 12 19:40:40.945435 kernel: Kernel/User page tables isolation: enabled
Dec 12 19:40:40.945447 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 19:40:40.945459 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 19:40:40.945471 kernel: Dynamic Preempt: voluntary
Dec 12 19:40:40.945487 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 19:40:40.945512 kernel: rcu: RCU event tracing is enabled.
Dec 12 19:40:40.945524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 12 19:40:40.945536 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 19:40:40.945548 kernel: Rude variant of Tasks RCU enabled.
Dec 12 19:40:40.945572 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 19:40:40.945583 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 19:40:40.945593 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 12 19:40:40.945604 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 12 19:40:40.945627 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 12 19:40:40.945643 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 12 19:40:40.945655 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 12 19:40:40.945666 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 19:40:40.945688 kernel: Console: colour VGA+ 80x25
Dec 12 19:40:40.945704 kernel: printk: legacy console [tty0] enabled
Dec 12 19:40:40.945716 kernel: printk: legacy console [ttyS0] enabled
Dec 12 19:40:40.945728 kernel: ACPI: Core revision 20240827
Dec 12 19:40:40.945739 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 19:40:40.945751 kernel: x2apic enabled
Dec 12 19:40:40.945763 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 19:40:40.945775 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 12 19:40:40.945792 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 12 19:40:40.945804 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 19:40:40.945828 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 12 19:40:40.945840 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 12 19:40:40.945865 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 19:40:40.945891 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 19:40:40.945913 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 19:40:40.945927 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 19:40:40.945939 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 19:40:40.945952 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 19:40:40.945964 kernel: MDS: Mitigation: Clear CPU buffers
Dec 12 19:40:40.945976 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 12 19:40:40.945988 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 12 19:40:40.946000 kernel: active return thunk: its_return_thunk
Dec 12 19:40:40.946023 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 12 19:40:40.946036 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 19:40:40.946054 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 19:40:40.946067 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 19:40:40.946079 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 19:40:40.947126 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 12 19:40:40.947143 kernel: Freeing SMP alternatives memory: 32K
Dec 12 19:40:40.947155 kernel: pid_max: default: 32768 minimum: 301
Dec 12 19:40:40.947168 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 19:40:40.947180 kernel: landlock: Up and running.
Dec 12 19:40:40.947193 kernel: SELinux: Initializing.
Dec 12 19:40:40.947205 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 12 19:40:40.947218 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 12 19:40:40.947231 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 12 19:40:40.947250 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 12 19:40:40.947263 kernel: signal: max sigframe size: 1776
Dec 12 19:40:40.947276 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 19:40:40.947289 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 19:40:40.947302 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Dec 12 19:40:40.947315 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 12 19:40:40.947328 kernel: smp: Bringing up secondary CPUs ...
Dec 12 19:40:40.947340 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 19:40:40.947353 kernel: .... node #0, CPUs: #1
Dec 12 19:40:40.947370 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 19:40:40.947383 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 12 19:40:40.947396 kernel: Memory: 1887480K/2096616K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 203120K reserved, 0K cma-reserved)
Dec 12 19:40:40.947409 kernel: devtmpfs: initialized
Dec 12 19:40:40.947422 kernel: x86/mm: Memory block size: 128MB
Dec 12 19:40:40.947435 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 19:40:40.947447 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 12 19:40:40.947460 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 19:40:40.947472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 19:40:40.947489 kernel: audit: initializing netlink subsys (disabled)
Dec 12 19:40:40.947502 kernel: audit: type=2000 audit(1765568436.526:1): state=initialized audit_enabled=0 res=1
Dec 12 19:40:40.947515 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 19:40:40.947527 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 19:40:40.947540 kernel: cpuidle: using governor menu
Dec 12 19:40:40.947553 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 19:40:40.947565 kernel: dca service started, version 1.12.1
Dec 12 19:40:40.947578 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 19:40:40.947590 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 19:40:40.947620 kernel: PCI: Using configuration type 1 for base access
Dec 12 19:40:40.947633 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 19:40:40.947645 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 19:40:40.947657 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 19:40:40.947669 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 19:40:40.947682 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 19:40:40.947694 kernel: ACPI: Added _OSI(Module Device)
Dec 12 19:40:40.947706 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 19:40:40.947718 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 19:40:40.947746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 19:40:40.947758 kernel: ACPI: Interpreter enabled
Dec 12 19:40:40.947770 kernel: ACPI: PM: (supports S0 S5)
Dec 12 19:40:40.947781 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 19:40:40.947793 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 19:40:40.947805 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 19:40:40.947817 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 19:40:40.947828 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 19:40:40.950190 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 19:40:40.950386 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 19:40:40.950559 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 19:40:40.950580 kernel: PCI host bridge to bus 0000:00
Dec 12 19:40:40.950768 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 19:40:40.950924 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 19:40:40.951111 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 19:40:40.951277 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 19:40:40.951429 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 19:40:40.951581 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 12 19:40:40.951734 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 19:40:40.951942 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 19:40:40.956227 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Dec 12 19:40:40.956433 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Dec 12 19:40:40.956606 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Dec 12 19:40:40.956773 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Dec 12 19:40:40.956955 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 19:40:40.957195 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.957414 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Dec 12 19:40:40.957584 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 12 19:40:40.957758 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 12 19:40:40.957923 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 12 19:40:40.961357 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.961535 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Dec 12 19:40:40.961713 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 12 19:40:40.961879 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 12 19:40:40.962061 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 12 19:40:40.962298 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.962465 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Dec 12 19:40:40.962632 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 12 19:40:40.962796 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 12 19:40:40.962962 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 12 19:40:40.963185 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.963353 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Dec 12 19:40:40.963526 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 12 19:40:40.963690 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 12 19:40:40.963902 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 12 19:40:40.964115 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.964286 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Dec 12 19:40:40.964452 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 12 19:40:40.964615 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 12 19:40:40.964781 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 12 19:40:40.964969 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.967762 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Dec 12 19:40:40.967943 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 12 19:40:40.968154 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 12 19:40:40.968346 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 12 19:40:40.968530 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.968697 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Dec 12 19:40:40.968872 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 12 19:40:40.969056 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 12 19:40:40.970879 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 12 19:40:40.971121 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 19:40:40.971303 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Dec 12 19:40:40.971474 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 12 19:40:40.971654 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 12 19:40:40.971820 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 12 19:40:40.971998 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 19:40:40.972201 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 12 19:40:40.972371 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Dec 12 19:40:40.972588 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 12 19:40:40.972758 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Dec 12 19:40:40.972947 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 19:40:40.974859 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Dec 12 19:40:40.975051 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Dec 12 19:40:40.975308 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 12 19:40:40.975494 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 19:40:40.975719 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 19:40:40.975913 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 19:40:40.977557 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Dec 12 19:40:40.977739 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Dec 12 19:40:40.977916 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 19:40:40.978132 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 19:40:40.978319 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Dec 12 19:40:40.978533 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Dec 12 19:40:40.978722 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 12 19:40:40.978893 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 12 19:40:40.979103 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 12 19:40:40.979317 kernel: pci_bus 0000:02: extended config space not accessible
Dec 12 19:40:40.979511 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Dec 12 19:40:40.979688 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Dec 12 19:40:40.979861 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 12 19:40:40.981967 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 12 19:40:40.982200 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Dec 12 19:40:40.982375 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 12 19:40:40.984302 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 12 19:40:40.984491 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 12 19:40:40.984670 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 12 19:40:40.984855 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 12 19:40:40.985043 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 12 19:40:40.985267 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 12 19:40:40.985440 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 12 19:40:40.985610 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 12 19:40:40.985632 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 19:40:40.985646 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 19:40:40.985659 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 19:40:40.985680 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 19:40:40.985693 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 19:40:40.985707 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 19:40:40.985720 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 19:40:40.985733 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 19:40:40.985746 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 19:40:40.985758 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 19:40:40.985782 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 19:40:40.985794 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 19:40:40.985812 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 19:40:40.985825 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 19:40:40.985839 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 19:40:40.985852 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 19:40:40.985865 kernel: iommu: Default domain type: Translated
Dec 12 19:40:40.985878 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 19:40:40.985891 kernel: PCI: Using ACPI for IRQ routing
Dec 12 19:40:40.985904 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 19:40:40.985916 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 12 19:40:40.985934 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 12 19:40:40.987174 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 19:40:40.987356 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 19:40:40.987525 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 19:40:40.987545 kernel: vgaarb: loaded
Dec 12 19:40:40.987570 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 19:40:40.987584 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 19:40:40.987598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 19:40:40.987619 kernel: pnp: PnP ACPI init
Dec 12 19:40:40.987811 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 19:40:40.987834 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 19:40:40.987847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 19:40:40.987860 kernel: NET: Registered PF_INET protocol family
Dec 12 19:40:40.987873 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 19:40:40.987887 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 12 19:40:40.987900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 19:40:40.987920 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 12 19:40:40.987933 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 12 19:40:40.987946 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 12 19:40:40.987959 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 12 19:40:40.987972 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 12 19:40:40.987985 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 19:40:40.987997 kernel: NET: Registered PF_XDP protocol family
Dec 12 19:40:40.990224 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 12 19:40:40.990411 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 12 19:40:40.990592 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 12 19:40:40.990761 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 12 19:40:40.990928 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 12 19:40:40.993136 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 12 19:40:40.993354 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 12 19:40:40.993523 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 12 19:40:40.993691 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Dec 12 19:40:40.993856 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Dec 12 19:40:40.994047 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Dec 12 19:40:40.994258 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Dec 12 19:40:40.994426 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Dec 12 19:40:40.994591 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Dec 12 19:40:40.994755 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Dec 12 19:40:40.994920 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Dec 12 19:40:40.995125 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 12 19:40:40.995328 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 12 19:40:40.995501 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 12 19:40:40.995666 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 12 19:40:40.995830 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 12 19:40:40.995995 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 12 19:40:40.998222 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 12 19:40:40.998403 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 12 19:40:40.998576 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 12 19:40:40.998775 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 12 19:40:40.998945 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 12 19:40:40.999157 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 12 19:40:40.999328 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 12 19:40:40.999503 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 12 19:40:40.999670 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 12 19:40:40.999835 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 12 19:40:41.000000 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 12 19:40:41.002269 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 12 19:40:41.002481 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 12 19:40:41.002696 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 12 19:40:41.002981 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 12 19:40:41.003257 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 12 19:40:41.003438 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 12 19:40:41.003776 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 12 19:40:41.003990 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 12 19:40:41.008219 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 12 19:40:41.008459 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 12 19:40:41.008635 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 12 19:40:41.008805 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 12 19:40:41.008973 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 12 19:40:41.009183 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 12 19:40:41.009376 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 12 19:40:41.009583 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 12 19:40:41.009752 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 12 19:40:41.009913 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 19:40:41.010149 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 19:40:41.010322 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 19:40:41.010476 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 19:40:41.010628 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 19:40:41.010779 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 12 19:40:41.010960 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 12 19:40:41.011301 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 12 19:40:41.011465 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 12 19:40:41.011635 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 12 19:40:41.011808 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 12 19:40:41.011967 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 12 19:40:41.012196 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 12 19:40:41.012367 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 12 19:40:41.012526 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 12 19:40:41.012682 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 12 19:40:41.012855 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 12 19:40:41.013027 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 12 19:40:41.013224 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 12 19:40:41.013401 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 12 19:40:41.013559 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 12 19:40:41.013716 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 12 19:40:41.013882 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 12 19:40:41.014056 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 12 19:40:41.014234 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 12 19:40:41.014422 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 12 19:40:41.014587 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 12 19:40:41.014744 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 12 19:40:41.014917 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 12 19:40:41.015118 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 12 19:40:41.015283 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 12 19:40:41.015305 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 19:40:41.015320 kernel: PCI: CLS 0 bytes, default 64
Dec 12 19:40:41.015341 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 19:40:41.015355 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Dec 12 19:40:41.015369 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 12 19:40:41.015383 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 12 19:40:41.015397 kernel: Initialise system trusted keyrings
Dec 12 19:40:41.015411 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 12 19:40:41.015425 kernel: Key type asymmetric registered
Dec 12 19:40:41.015443 kernel: Asymmetric key parser 'x509' registered
Dec 12 19:40:41.015456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 19:40:41.015474 kernel: io scheduler mq-deadline registered
Dec 12 19:40:41.015488 kernel: io scheduler kyber registered
Dec 12 19:40:41.015501 kernel: io scheduler bfq registered
Dec 12 19:40:41.015688 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 12 19:40:41.015861 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 12 19:40:41.016044 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.016233 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 12 19:40:41.016410 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 12 19:40:41.016619 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.016804 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 12 19:40:41.016972 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 12 19:40:41.017194 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.017364 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 12 19:40:41.017540 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 12 19:40:41.017708 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.017879 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 12 19:40:41.018061 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 12 19:40:41.018288 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.018459 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 12 19:40:41.018633 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 12 19:40:41.018800 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.018974 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 12 19:40:41.019185 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 12 19:40:41.019355 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.019521 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 12 19:40:41.019696 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 12 19:40:41.019902 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 12 19:40:41.019925 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 19:40:41.019940 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 19:40:41.019954 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 12 19:40:41.019967 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 19:40:41.019981 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 19:40:41.019995 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 19:40:41.020101 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 19:40:41.020121 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 19:40:41.020152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 19:40:41.020378 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 19:40:41.020541 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 19:40:41.020699 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T19:40:40 UTC (1765568440)
Dec 12 19:40:41.020855 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 12 19:40:41.020883 kernel: intel_pstate: CPU model not supported
Dec 12 19:40:41.020898 kernel: NET: Registered PF_INET6 protocol family
Dec 12 19:40:41.020912 kernel: Segment Routing with IPv6
Dec 12 19:40:41.020926 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 19:40:41.020939 kernel: NET: Registered PF_PACKET protocol family
Dec 12 19:40:41.020953 kernel: Key type dns_resolver registered
Dec 12 19:40:41.020966 kernel: IPI shorthand broadcast: enabled
Dec 12 19:40:41.020980 kernel: sched_clock: Marking stable (3505004110, 230077820)->(3862351879, -127269949)
Dec 12 19:40:41.020994 kernel: registered taskstats version 1
Dec 12 19:40:41.021020 kernel: Loading compiled-in X.509 certificates
Dec 12 19:40:41.021040 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 19:40:41.021054 kernel: Demotion targets for Node 0: null
Dec 12 19:40:41.021067 kernel: Key type .fscrypt registered
Dec 12 19:40:41.021081 kernel: Key type fscrypt-provisioning registered
Dec 12 19:40:41.021124 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 19:40:41.021139 kernel: ima: Allocated hash algorithm: sha1
Dec 12 19:40:41.021153 kernel: ima: No architecture policies found
Dec 12 19:40:41.021166 kernel: clk: Disabling unused clocks
Dec 12 19:40:41.021180 kernel: Warning: unable to open an initial console.
Dec 12 19:40:41.021200 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 19:40:41.021214 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 19:40:41.021227 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 19:40:41.021241 kernel: Run /init as init process
Dec 12 19:40:41.021254 kernel: with arguments:
Dec 12 19:40:41.021268 kernel: /init
Dec 12 19:40:41.021281 kernel: with environment:
Dec 12 19:40:41.021294 kernel: HOME=/
Dec 12 19:40:41.021307 kernel: TERM=linux
Dec 12 19:40:41.021335 systemd[1]: Successfully made /usr/ read-only.
Dec 12 19:40:41.021355 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 19:40:41.021370 systemd[1]: Detected virtualization kvm.
Dec 12 19:40:41.021384 systemd[1]: Detected architecture x86-64.
Dec 12 19:40:41.021398 systemd[1]: Running in initrd.
Dec 12 19:40:41.021412 systemd[1]: No hostname configured, using default hostname.
Dec 12 19:40:41.021426 systemd[1]: Hostname set to .
Dec 12 19:40:41.021446 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 19:40:41.021461 systemd[1]: Queued start job for default target initrd.target.
Dec 12 19:40:41.021475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 19:40:41.021490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 19:40:41.021505 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 19:40:41.021519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 19:40:41.021534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 19:40:41.021554 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 19:40:41.021569 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 19:40:41.021584 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 19:40:41.021598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 19:40:41.021613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 19:40:41.021627 systemd[1]: Reached target paths.target - Path Units.
Dec 12 19:40:41.021641 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 19:40:41.021655 systemd[1]: Reached target swap.target - Swaps.
Dec 12 19:40:41.021675 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 19:40:41.021689 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 19:40:41.021704 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 19:40:41.021718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 19:40:41.021733 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 19:40:41.021747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 19:40:41.021761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 19:40:41.021775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 19:40:41.021790 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 19:40:41.021809 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 19:40:41.021823 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 19:40:41.021838 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 19:40:41.021852 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 19:40:41.021867 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 19:40:41.021881 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 19:40:41.021895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 19:40:41.021910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 19:40:41.021929 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 19:40:41.021944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 19:40:41.022028 systemd-journald[210]: Collecting audit messages is disabled.
Dec 12 19:40:41.022072 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 19:40:41.022104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 19:40:41.022120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 19:40:41.022134 kernel: Bridge firewalling registered
Dec 12 19:40:41.022148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 19:40:41.022170 systemd-journald[210]: Journal started
Dec 12 19:40:41.022205 systemd-journald[210]: Runtime Journal (/run/log/journal/97f9087a9ca54f77a016ee35a174b836) is 4.7M, max 37.8M, 33.1M free.
Dec 12 19:40:40.973159 systemd-modules-load[212]: Inserted module 'overlay'
Dec 12 19:40:41.075658 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 19:40:41.008311 systemd-modules-load[212]: Inserted module 'br_netfilter'
Dec 12 19:40:41.076688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 19:40:41.077987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 19:40:41.081579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 19:40:41.084250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 19:40:41.087276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 19:40:41.091243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 19:40:41.119900 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 19:40:41.122776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 19:40:41.123868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 19:40:41.129821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 19:40:41.130891 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 19:40:41.134300 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 19:40:41.138271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 19:40:41.167634 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 19:40:41.192567 systemd-resolved[250]: Positive Trust Anchors:
Dec 12 19:40:41.192602 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 19:40:41.192653 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 19:40:41.202072 systemd-resolved[250]: Defaulting to hostname 'linux'.
Dec 12 19:40:41.205246 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 19:40:41.206878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 19:40:41.288176 kernel: SCSI subsystem initialized
Dec 12 19:40:41.299113 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 19:40:41.313133 kernel: iscsi: registered transport (tcp)
Dec 12 19:40:41.340248 kernel: iscsi: registered transport (qla4xxx)
Dec 12 19:40:41.340324 kernel: QLogic iSCSI HBA Driver
Dec 12 19:40:41.367953 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 19:40:41.387133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 19:40:41.388565 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 19:40:41.454226 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 19:40:41.458366 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 19:40:41.529155 kernel: raid6: sse2x4 gen() 13057 MB/s
Dec 12 19:40:41.542135 kernel: raid6: sse2x2 gen() 8808 MB/s
Dec 12 19:40:41.560886 kernel: raid6: sse2x1 gen() 9035 MB/s
Dec 12 19:40:41.560969 kernel: raid6: using algorithm sse2x4 gen() 13057 MB/s
Dec 12 19:40:41.579958 kernel: raid6: .... xor() 7451 MB/s, rmw enabled
Dec 12 19:40:41.580028 kernel: raid6: using ssse3x2 recovery algorithm
Dec 12 19:40:41.606120 kernel: xor: automatically using best checksumming function avx
Dec 12 19:40:41.803149 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 19:40:41.813260 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 19:40:41.817205 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 19:40:41.852073 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Dec 12 19:40:41.861583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 19:40:41.865810 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 19:40:41.898191 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Dec 12 19:40:41.935139 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 19:40:41.937954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 19:40:42.062318 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 19:40:42.067156 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 19:40:42.194565 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Dec 12 19:40:42.202263 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 12 19:40:42.211105 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 19:40:42.227793 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 19:40:42.227880 kernel: GPT:17805311 != 125829119
Dec 12 19:40:42.227900 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 19:40:42.230388 kernel: GPT:17805311 != 125829119
Dec 12 19:40:42.230439 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 19:40:42.232431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 19:40:42.260115 kernel: ACPI: bus type USB registered
Dec 12 19:40:42.263732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 19:40:42.264887 kernel: usbcore: registered new interface driver usbfs
Dec 12 19:40:42.265231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 19:40:42.272121 kernel: usbcore: registered new interface driver hub
Dec 12 19:40:42.272164 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 19:40:42.272184 kernel: AES CTR mode by8 optimization enabled
Dec 12 19:40:42.272977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 19:40:42.277389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 19:40:42.280433 kernel: usbcore: registered new device driver usb
Dec 12 19:40:42.301114 kernel: libata version 3.00 loaded.
Dec 12 19:40:42.306212 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 19:40:42.366131 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 19:40:42.366471 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 19:40:42.370132 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 19:40:42.370356 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 19:40:42.370558 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 19:40:42.374942 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 19:40:42.472917 kernel: scsi host0: ahci
Dec 12 19:40:42.473297 kernel: scsi host1: ahci
Dec 12 19:40:42.473515 kernel: scsi host2: ahci
Dec 12 19:40:42.473725 kernel: scsi host3: ahci
Dec 12 19:40:42.473954 kernel: scsi host4: ahci
Dec 12 19:40:42.474210 kernel: scsi host5: ahci
Dec 12 19:40:42.474429 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1
Dec 12 19:40:42.474459 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1
Dec 12 19:40:42.474484 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1
Dec 12 19:40:42.474502 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1
Dec 12 19:40:42.474519 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1
Dec 12 19:40:42.474537 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1
Dec 12 19:40:42.472576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 19:40:42.504801 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 19:40:42.515114 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 19:40:42.515954 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 19:40:42.529846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 19:40:42.531961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 19:40:42.553105 disk-uuid[614]: Primary Header is updated.
Dec 12 19:40:42.553105 disk-uuid[614]: Secondary Entries is updated.
Dec 12 19:40:42.553105 disk-uuid[614]: Secondary Header is updated.
Dec 12 19:40:42.559123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 19:40:42.568119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 19:40:42.695130 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 19:40:42.711782 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 19:40:42.711871 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 19:40:42.711891 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 19:40:42.711917 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 19:40:42.711934 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 19:40:42.733654 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 12 19:40:42.734053 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 12 19:40:42.739122 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 12 19:40:42.742252 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 12 19:40:42.742477 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 12 19:40:42.744110 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 12 19:40:42.748665 kernel: hub 1-0:1.0: USB hub found
Dec 12 19:40:42.748964 kernel: hub 1-0:1.0: 4 ports detected
Dec 12 19:40:42.753101 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 12 19:40:42.753373 kernel: hub 2-0:1.0: USB hub found
Dec 12 19:40:42.753593 kernel: hub 2-0:1.0: 4 ports detected
Dec 12 19:40:42.781408 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 19:40:42.784715 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 19:40:42.786472 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 19:40:42.787220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 19:40:42.790095 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 19:40:42.814377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 19:40:42.984145 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 12 19:40:43.127148 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 19:40:43.134397 kernel: usbcore: registered new interface driver usbhid
Dec 12 19:40:43.134438 kernel: usbhid: USB HID core driver
Dec 12 19:40:43.142613 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Dec 12 19:40:43.142652 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 12 19:40:43.570009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 19:40:43.571524 disk-uuid[615]: The operation has completed successfully.
Dec 12 19:40:43.628909 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 19:40:43.629127 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 19:40:43.677840 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 19:40:43.700840 sh[642]: Success
Dec 12 19:40:43.726204 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 19:40:43.726302 kernel: device-mapper: uevent: version 1.0.3
Dec 12 19:40:43.730113 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 19:40:43.742118 kernel: device-mapper: verity: sha256 using shash "sha256-avx"
Dec 12 19:40:43.795908 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 19:40:43.800559 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 19:40:43.817123 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 19:40:43.832315 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (654)
Dec 12 19:40:43.835478 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 19:40:43.835523 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 19:40:43.846997 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 19:40:43.847075 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 19:40:43.849686 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 19:40:43.851868 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 19:40:43.852760 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 19:40:43.853925 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 19:40:43.858302 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 19:40:43.894149 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (687) Dec 12 19:40:43.897583 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 19:40:43.897621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:40:43.905572 kernel: BTRFS info (device vda6): turning on async discard Dec 12 19:40:43.905639 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 19:40:43.913114 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 19:40:43.915104 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 19:40:43.918368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 19:40:44.001990 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 19:40:44.007306 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 19:40:44.064936 systemd-networkd[823]: lo: Link UP Dec 12 19:40:44.066295 systemd-networkd[823]: lo: Gained carrier Dec 12 19:40:44.068519 systemd-networkd[823]: Enumeration completed Dec 12 19:40:44.068685 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 19:40:44.069913 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 19:40:44.069920 systemd-networkd[823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 19:40:44.071225 systemd-networkd[823]: eth0: Link UP Dec 12 19:40:44.071473 systemd-networkd[823]: eth0: Gained carrier Dec 12 19:40:44.071487 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 19:40:44.075295 systemd[1]: Reached target network.target - Network. Dec 12 19:40:44.116303 systemd-networkd[823]: eth0: DHCPv4 address 10.244.20.246/30, gateway 10.244.20.245 acquired from 10.244.20.245 Dec 12 19:40:44.149928 ignition[742]: Ignition 2.22.0 Dec 12 19:40:44.149951 ignition[742]: Stage: fetch-offline Dec 12 19:40:44.150065 ignition[742]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:44.153177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 19:40:44.150105 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:44.150302 ignition[742]: parsed url from cmdline: "" Dec 12 19:40:44.150309 ignition[742]: no config URL provided Dec 12 19:40:44.150319 ignition[742]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 19:40:44.150335 ignition[742]: no config at "/usr/lib/ignition/user.ign" Dec 12 19:40:44.150355 ignition[742]: failed to fetch config: resource requires networking Dec 12 19:40:44.158267 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 19:40:44.150935 ignition[742]: Ignition finished successfully Dec 12 19:40:44.206035 ignition[832]: Ignition 2.22.0 Dec 12 19:40:44.206057 ignition[832]: Stage: fetch Dec 12 19:40:44.208203 ignition[832]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:44.208228 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:44.208354 ignition[832]: parsed url from cmdline: "" Dec 12 19:40:44.208360 ignition[832]: no config URL provided Dec 12 19:40:44.208370 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 19:40:44.208385 ignition[832]: no config at "/usr/lib/ignition/user.ign" Dec 12 19:40:44.208581 ignition[832]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 12 19:40:44.208630 ignition[832]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 12 19:40:44.208734 ignition[832]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 12 19:40:44.230912 ignition[832]: GET result: OK Dec 12 19:40:44.231342 ignition[832]: parsing config with SHA512: 35725bd8f99c6483a166076ab3957e0238debf64885f1ee64cb38f169551580b2ee4a4f6c955726c701d0fa14055e500a31c36c0a410872d72050128d02e65f9 Dec 12 19:40:44.240774 unknown[832]: fetched base config from "system" Dec 12 19:40:44.241191 ignition[832]: fetch: fetch complete Dec 12 19:40:44.240791 unknown[832]: fetched base config from "system" Dec 12 19:40:44.241200 ignition[832]: fetch: fetch passed Dec 12 19:40:44.240801 unknown[832]: fetched user config from "openstack" Dec 12 19:40:44.241266 ignition[832]: Ignition finished successfully Dec 12 19:40:44.245174 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 19:40:44.247357 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 19:40:44.284698 ignition[838]: Ignition 2.22.0 Dec 12 19:40:44.285982 ignition[838]: Stage: kargs Dec 12 19:40:44.286197 ignition[838]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:44.286216 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:44.289264 ignition[838]: kargs: kargs passed Dec 12 19:40:44.289370 ignition[838]: Ignition finished successfully Dec 12 19:40:44.291202 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 19:40:44.294903 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 19:40:44.332936 ignition[844]: Ignition 2.22.0 Dec 12 19:40:44.332975 ignition[844]: Stage: disks Dec 12 19:40:44.333194 ignition[844]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:44.333213 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:44.334204 ignition[844]: disks: disks passed Dec 12 19:40:44.336587 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 19:40:44.334282 ignition[844]: Ignition finished successfully Dec 12 19:40:44.338531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 19:40:44.339308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 19:40:44.340778 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 19:40:44.342335 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 19:40:44.343908 systemd[1]: Reached target basic.target - Basic System. Dec 12 19:40:44.347294 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 12 19:40:44.380541 systemd-fsck[852]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Dec 12 19:40:44.383486 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 19:40:44.388220 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 19:40:44.519134 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 19:40:44.520814 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 19:40:44.522285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 19:40:44.524819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 19:40:44.526695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 19:40:44.529336 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 19:40:44.531287 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 12 19:40:44.534309 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 19:40:44.534358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 19:40:44.544342 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 19:40:44.548266 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 19:40:44.564143 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Dec 12 19:40:44.569164 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 19:40:44.572118 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:40:44.592825 kernel: BTRFS info (device vda6): turning on async discard Dec 12 19:40:44.592941 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 19:40:44.596046 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 19:40:44.638135 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:40:44.640819 initrd-setup-root[889]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 19:40:44.649690 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory Dec 12 19:40:44.656881 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 19:40:44.664667 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 19:40:44.779922 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 19:40:44.782893 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 19:40:44.785313 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 19:40:44.809129 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 19:40:44.831765 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 19:40:44.832502 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 12 19:40:44.851047 ignition[979]: INFO : Ignition 2.22.0 Dec 12 19:40:44.853234 ignition[979]: INFO : Stage: mount Dec 12 19:40:44.853234 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:44.853234 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:44.856854 ignition[979]: INFO : mount: mount passed Dec 12 19:40:44.856854 ignition[979]: INFO : Ignition finished successfully Dec 12 19:40:44.856497 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 19:40:45.422657 systemd-networkd[823]: eth0: Gained IPv6LL Dec 12 19:40:45.670142 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:40:46.932117 systemd-networkd[823]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:53d:24:19ff:fef4:14f6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:53d:24:19ff:fef4:14f6/64 assigned by NDisc. Dec 12 19:40:46.932131 systemd-networkd[823]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 12 19:40:47.693199 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:40:51.707130 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:40:51.713846 coreos-metadata[862]: Dec 12 19:40:51.713 WARN failed to locate config-drive, using the metadata service API instead Dec 12 19:40:51.738751 coreos-metadata[862]: Dec 12 19:40:51.738 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 19:40:51.752263 coreos-metadata[862]: Dec 12 19:40:51.752 INFO Fetch successful Dec 12 19:40:51.753212 coreos-metadata[862]: Dec 12 19:40:51.753 INFO wrote hostname srv-tupcq.gb1.brightbox.com to /sysroot/etc/hostname Dec 12 19:40:51.755866 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 12 19:40:51.757381 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 12 19:40:51.762952 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 19:40:51.789302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 19:40:51.829122 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994) Dec 12 19:40:51.829211 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 19:40:51.831665 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:40:51.839124 kernel: BTRFS info (device vda6): turning on async discard Dec 12 19:40:51.839208 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 19:40:51.843963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 19:40:51.909330 ignition[1012]: INFO : Ignition 2.22.0 Dec 12 19:40:51.909330 ignition[1012]: INFO : Stage: files Dec 12 19:40:51.911659 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:51.911659 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:51.911659 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Dec 12 19:40:51.916406 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 19:40:51.916406 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 19:40:51.924689 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 19:40:51.924689 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 19:40:51.924689 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 19:40:51.921931 unknown[1012]: wrote ssh authorized keys file for user: core Dec 12 19:40:51.928840 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 19:40:51.928840 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 19:40:52.104855 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 19:40:52.395358 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 19:40:52.395358 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 19:40:52.398221 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 19:40:52.406261 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 19:40:52.406261 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 19:40:52.406261 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 19:40:52.406261 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 19:40:52.406261 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 19:40:52.406261 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 19:40:52.857441 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 19:40:54.760499 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 19:40:54.760499 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 19:40:54.765303 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 19:40:54.765303 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 19:40:54.765303 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 19:40:54.765303 ignition[1012]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 19:40:54.765303 ignition[1012]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 19:40:54.775026 ignition[1012]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 19:40:54.775026 ignition[1012]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 19:40:54.775026 ignition[1012]: INFO : files: files passed Dec 12 19:40:54.775026 ignition[1012]: INFO : Ignition finished successfully Dec 12 19:40:54.771301 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 19:40:54.776350 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 19:40:54.782313 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 19:40:54.807701 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 19:40:54.808640 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 19:40:54.817183 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 19:40:54.817183 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 19:40:54.820176 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 19:40:54.821477 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 19:40:54.822909 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 19:40:54.825238 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 19:40:54.888391 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 19:40:54.888581 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 12 19:40:54.890746 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 19:40:54.891825 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 19:40:54.893497 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 19:40:54.896258 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 19:40:54.923507 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 19:40:54.927687 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 19:40:54.952383 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 19:40:54.954355 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 19:40:54.956211 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 19:40:54.957777 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 19:40:54.957993 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 19:40:54.961016 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 19:40:54.961874 systemd[1]: Stopped target basic.target - Basic System. Dec 12 19:40:54.963297 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 19:40:54.964825 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 19:40:54.966414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 19:40:54.968034 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 19:40:54.969698 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 19:40:54.971188 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 19:40:54.972778 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 19:40:54.974348 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 19:40:54.975891 systemd[1]: Stopped target swap.target - Swaps. Dec 12 19:40:54.977106 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 19:40:54.977338 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 19:40:54.979046 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 19:40:54.980051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 19:40:54.981602 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 19:40:54.983189 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 19:40:54.984420 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 19:40:54.984668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 19:40:54.986591 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 19:40:54.986859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 19:40:54.988622 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 19:40:54.988882 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 19:40:54.998358 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 19:40:55.001370 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 19:40:55.002060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 12 19:40:55.003353 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 19:40:55.007051 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 19:40:55.007430 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 19:40:55.016727 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 19:40:55.019146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 19:40:55.036667 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 19:40:55.039576 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 19:40:55.039805 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 19:40:55.055132 ignition[1065]: INFO : Ignition 2.22.0 Dec 12 19:40:55.055132 ignition[1065]: INFO : Stage: umount Dec 12 19:40:55.055132 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 19:40:55.055132 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:40:55.059439 ignition[1065]: INFO : umount: umount passed Dec 12 19:40:55.059439 ignition[1065]: INFO : Ignition finished successfully Dec 12 19:40:55.059558 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 19:40:55.059745 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 19:40:55.061436 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 19:40:55.061588 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 19:40:55.063060 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 19:40:55.063207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 19:40:55.064399 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 19:40:55.064473 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 19:40:55.065817 systemd[1]: Stopped target network.target - Network. Dec 12 19:40:55.067192 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 19:40:55.067290 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 19:40:55.068695 systemd[1]: Stopped target paths.target - Path Units. Dec 12 19:40:55.070007 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 19:40:55.073184 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 19:40:55.074902 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 19:40:55.076266 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 19:40:55.077808 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 19:40:55.077897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 19:40:55.079433 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 19:40:55.079497 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 19:40:55.080752 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 19:40:55.080863 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 19:40:55.082159 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 19:40:55.082234 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 19:40:55.083555 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 19:40:55.083640 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 12 19:40:55.085351 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 19:40:55.087920 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 19:40:55.092265 systemd-networkd[823]: eth0: DHCPv6 lease lost Dec 12 19:40:55.096770 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 19:40:55.097004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 19:40:55.102359 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 19:40:55.102703 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 19:40:55.102886 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 19:40:55.105297 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 19:40:55.105967 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 19:40:55.107348 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 19:40:55.107439 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 19:40:55.110135 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 19:40:55.112440 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 19:40:55.112513 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 19:40:55.114207 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 19:40:55.114274 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 19:40:55.117917 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 19:40:55.117990 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 19:40:55.119026 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 19:40:55.119107 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 19:40:55.120282 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 19:40:55.123067 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 19:40:55.123284 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 19:40:55.133724 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 19:40:55.134801 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 19:40:55.137346 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 19:40:55.137584 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 19:40:55.139023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 19:40:55.139082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 19:40:55.140717 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 19:40:55.140808 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 19:40:55.143003 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 19:40:55.143075 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 19:40:55.146211 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 19:40:55.146289 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 12 19:40:55.148690 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 19:40:55.150467 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 19:40:55.150541 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 19:40:55.155127 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 19:40:55.155201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 19:40:55.157235 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 19:40:55.157308 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 19:40:55.158975 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 19:40:55.159042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 19:40:55.160004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 19:40:55.160106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:40:55.170225 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 19:40:55.170309 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 12 19:40:55.170380 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 19:40:55.170456 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 19:40:55.171188 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 19:40:55.171362 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 19:40:55.173343 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 19:40:55.173512 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 19:40:55.175862 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 19:40:55.178131 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 19:40:55.199317 systemd[1]: Switching root. Dec 12 19:40:55.230921 systemd-journald[210]: Journal stopped Dec 12 19:40:56.932663 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). Dec 12 19:40:56.932811 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 19:40:56.932852 kernel: SELinux: policy capability open_perms=1 Dec 12 19:40:56.932893 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 19:40:56.932915 kernel: SELinux: policy capability always_check_network=0 Dec 12 19:40:56.932941 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 19:40:56.932960 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 19:40:56.932978 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 19:40:56.932997 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 19:40:56.933023 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 19:40:56.933044 kernel: audit: type=1403 audit(1765568455.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 19:40:56.933072 systemd[1]: Successfully loaded SELinux policy in 76.405ms. Dec 12 19:40:56.937216 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.346ms. 
Dec 12 19:40:56.937257 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 19:40:56.937280 systemd[1]: Detected virtualization kvm. Dec 12 19:40:56.937310 systemd[1]: Detected architecture x86-64. Dec 12 19:40:56.937331 systemd[1]: Detected first boot. Dec 12 19:40:56.937352 systemd[1]: Hostname set to <srv-tupcq.gb1.brightbox.com>. Dec 12 19:40:56.937372 systemd[1]: Initializing machine ID from VM UUID. Dec 12 19:40:56.937393 zram_generator::config[1108]: No configuration found. Dec 12 19:40:56.937436 kernel: Guest personality initialized and is inactive Dec 12 19:40:56.937462 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 19:40:56.937480 kernel: Initialized host personality Dec 12 19:40:56.937499 kernel: NET: Registered PF_VSOCK protocol family Dec 12 19:40:56.937527 systemd[1]: Populated /etc with preset unit settings. Dec 12 19:40:56.937550 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 19:40:56.937572 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 19:40:56.937592 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 19:40:56.937612 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 19:40:56.937645 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 19:40:56.937675 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 19:40:56.937707 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 19:40:56.937729 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 19:40:56.937763 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 19:40:56.937808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 19:40:56.937850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 19:40:56.937874 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 19:40:56.937901 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 19:40:56.937923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 19:40:56.937944 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 19:40:56.937964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 19:40:56.938002 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 19:40:56.938025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 19:40:56.938045 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 19:40:56.938065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 19:40:56.949176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 19:40:56.949262 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 19:40:56.949286 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 19:40:56.949308 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 19:40:56.949353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 19:40:56.949376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 19:40:56.949397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 19:40:56.949417 systemd[1]: Reached target slices.target - Slice Units. Dec 12 19:40:56.949438 systemd[1]: Reached target swap.target - Swaps. Dec 12 19:40:56.949459 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 19:40:56.949480 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 19:40:56.949502 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 19:40:56.949523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 19:40:56.949544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 19:40:56.949578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 19:40:56.949601 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 19:40:56.949622 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 19:40:56.949642 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 19:40:56.949662 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 19:40:56.949683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:56.949703 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 19:40:56.949732 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 19:40:56.949776 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 19:40:56.949801 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 19:40:56.949823 systemd[1]: Reached target machines.target - Containers. Dec 12 19:40:56.949845 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 19:40:56.949866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:40:56.949886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 19:40:56.949913 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 19:40:56.949934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 19:40:56.949954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 19:40:56.949988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 19:40:56.950011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 19:40:56.950031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 19:40:56.950052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 12 19:40:56.950072 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 19:40:56.951677 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 19:40:56.951710 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 19:40:56.951732 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 19:40:56.951782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:40:56.951809 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 19:40:56.951844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 19:40:56.951868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 19:40:56.951889 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 19:40:56.951922 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 19:40:56.951946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 19:40:56.951982 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 19:40:56.952005 systemd[1]: Stopped verity-setup.service. Dec 12 19:40:56.952039 kernel: fuse: init (API version 7.41) Dec 12 19:40:56.952064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:56.952105 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 19:40:56.952130 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 19:40:56.952151 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 19:40:56.952172 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 19:40:56.952192 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 19:40:56.952213 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 19:40:56.952232 kernel: loop: module loaded Dec 12 19:40:56.952267 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 19:40:56.952290 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 19:40:56.952311 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 19:40:56.952332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 19:40:56.952352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 19:40:56.952372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 19:40:56.952392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 19:40:56.952412 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 19:40:56.952432 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 19:40:56.952467 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 19:40:56.952489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 19:40:56.952573 systemd-journald[1198]: Collecting audit messages is disabled. Dec 12 19:40:56.952619 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 12 19:40:56.952641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 19:40:56.952663 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 19:40:56.952683 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 19:40:56.952720 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 19:40:56.952764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 19:40:56.952789 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 19:40:56.952810 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 19:40:56.952831 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 19:40:56.952852 systemd-journald[1198]: Journal started Dec 12 19:40:56.952888 systemd-journald[1198]: Runtime Journal (/run/log/journal/97f9087a9ca54f77a016ee35a174b836) is 4.7M, max 37.8M, 33.1M free. Dec 12 19:40:56.496292 systemd[1]: Queued start job for default target multi-user.target. Dec 12 19:40:56.509560 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 19:40:56.510346 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 19:40:56.967181 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 19:40:56.972119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:40:56.979134 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 19:40:56.979217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 19:40:56.989139 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 19:40:56.993885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 19:40:56.993966 kernel: ACPI: bus type drm_connector registered Dec 12 19:40:57.001477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 19:40:57.008169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 19:40:57.019122 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 19:40:57.027118 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 19:40:57.029043 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 19:40:57.030291 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 19:40:57.030636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 19:40:57.040354 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 19:40:57.045420 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 19:40:57.046958 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 19:40:57.068745 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 19:40:57.078700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 12 19:40:57.103139 kernel: loop0: detected capacity change from 0 to 110984 Dec 12 19:40:57.103710 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Dec 12 19:40:57.103738 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Dec 12 19:40:57.118202 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 19:40:57.124294 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 19:40:57.131004 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 19:40:57.143413 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 19:40:57.142730 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 19:40:57.158159 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 19:40:57.179271 kernel: loop1: detected capacity change from 0 to 128560 Dec 12 19:40:57.205292 systemd-journald[1198]: Time spent on flushing to /var/log/journal/97f9087a9ca54f77a016ee35a174b836 is 85.779ms for 1177 entries. Dec 12 19:40:57.205292 systemd-journald[1198]: System Journal (/var/log/journal/97f9087a9ca54f77a016ee35a174b836) is 8M, max 584.8M, 576.8M free. Dec 12 19:40:57.312571 systemd-journald[1198]: Received client request to flush runtime journal. Dec 12 19:40:57.312637 kernel: loop2: detected capacity change from 0 to 8 Dec 12 19:40:57.313593 kernel: loop3: detected capacity change from 0 to 224512 Dec 12 19:40:57.212805 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 19:40:57.283187 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 19:40:57.291422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 19:40:57.318470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 19:40:57.379170 kernel: loop4: detected capacity change from 0 to 110984 Dec 12 19:40:57.402123 kernel: loop5: detected capacity change from 0 to 128560 Dec 12 19:40:57.428164 kernel: loop6: detected capacity change from 0 to 8 Dec 12 19:40:57.426818 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 12 19:40:57.426839 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 12 19:40:57.440141 kernel: loop7: detected capacity change from 0 to 224512 Dec 12 19:40:57.446879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 19:40:57.488682 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 12 19:40:57.493593 (sd-merge)[1270]: Merged extensions into '/usr'. Dec 12 19:40:57.504363 systemd[1]: Reload requested from client PID 1227 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 19:40:57.504404 systemd[1]: Reloading... Dec 12 19:40:57.692114 zram_generator::config[1295]: No configuration found. Dec 12 19:40:57.837478 ldconfig[1223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 19:40:58.072581 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 19:40:58.073357 systemd[1]: Reloading finished in 568 ms. Dec 12 19:40:58.098021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 19:40:58.099821 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Dec 12 19:40:58.101162 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 19:40:58.113502 systemd[1]: Starting ensure-sysext.service... Dec 12 19:40:58.118459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 19:40:58.148228 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 19:40:58.148917 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 19:40:58.149434 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 19:40:58.149874 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 19:40:58.151570 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 19:40:58.152137 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Dec 12 19:40:58.152363 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Dec 12 19:40:58.154251 systemd[1]: Reload requested from client PID 1355 ('systemctl') (unit ensure-sysext.service)... Dec 12 19:40:58.154277 systemd[1]: Reloading... Dec 12 19:40:58.158643 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 19:40:58.158778 systemd-tmpfiles[1356]: Skipping /boot Dec 12 19:40:58.173767 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 19:40:58.173918 systemd-tmpfiles[1356]: Skipping /boot Dec 12 19:40:58.250168 zram_generator::config[1386]: No configuration found. Dec 12 19:40:58.528283 systemd[1]: Reloading finished in 373 ms. Dec 12 19:40:58.557995 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 19:40:58.580823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 19:40:58.592962 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 19:40:58.598392 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 19:40:58.608489 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 19:40:58.612365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 19:40:58.617974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 19:40:58.625705 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 19:40:58.631014 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:58.632758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:40:58.637590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 19:40:58.650393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 19:40:58.659692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 19:40:58.660616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 19:40:58.660817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:40:58.660971 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:58.667528 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 19:40:58.672459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:58.672799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:40:58.673062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:40:58.673233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:40:58.673391 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:58.683566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 19:40:58.684537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 19:40:58.690975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:58.692469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:40:58.702573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 19:40:58.704287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:40:58.704542 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:40:58.705232 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 19:40:58.705483 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:40:58.717643 systemd[1]: Finished ensure-sysext.service. Dec 12 19:40:58.722344 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 19:40:58.724793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 19:40:58.727621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 19:40:58.738558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 19:40:58.756587 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 19:40:58.757512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 12 19:40:58.760224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 19:40:58.774750 augenrules[1477]: No rules Dec 12 19:40:58.776619 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 19:40:58.778229 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 19:40:58.781397 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 19:40:58.783318 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 19:40:58.783906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 19:40:58.790960 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 19:40:58.805650 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 19:40:58.806984 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 19:40:58.809926 systemd-udevd[1446]: Using default interface naming scheme 'v255'. Dec 12 19:40:58.823272 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 19:40:58.826390 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 19:40:58.878766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 19:40:58.888405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 19:40:58.983014 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 19:40:58.984337 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 19:40:59.055714 systemd-resolved[1444]: Positive Trust Anchors: Dec 12 19:40:59.055751 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 19:40:59.055797 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 19:40:59.067507 systemd-resolved[1444]: Using system hostname 'srv-tupcq.gb1.brightbox.com'. Dec 12 19:40:59.071794 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 19:40:59.082948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 19:40:59.085303 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 19:40:59.087399 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 19:40:59.089278 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 19:40:59.090115 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 19:40:59.092665 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 19:40:59.094294 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Dec 12 19:40:59.096204 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 19:40:59.097116 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 19:40:59.097190 systemd[1]: Reached target paths.target - Path Units. Dec 12 19:40:59.099232 systemd[1]: Reached target timers.target - Timer Units. Dec 12 19:40:59.103427 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 19:40:59.108143 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 19:40:59.117308 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 19:40:59.118375 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 19:40:59.120189 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 19:40:59.129016 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 19:40:59.131806 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 19:40:59.133621 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 19:40:59.135959 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 19:40:59.136627 systemd[1]: Reached target basic.target - Basic System. Dec 12 19:40:59.137443 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 19:40:59.137511 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 19:40:59.139676 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 19:40:59.147249 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 19:40:59.150881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 19:40:59.159277 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 19:40:59.164651 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 19:40:59.167174 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 19:40:59.174355 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 19:40:59.180053 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 19:40:59.193396 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 19:40:59.208194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 19:40:59.220432 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 19:40:59.229476 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 19:40:59.233242 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 19:40:59.236135 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:40:59.237038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 12 19:40:59.241782 systemd-networkd[1501]: lo: Link UP Dec 12 19:40:59.241796 systemd-networkd[1501]: lo: Gained carrier Dec 12 19:40:59.246564 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 19:40:59.253334 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 19:40:59.258134 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 19:40:59.259832 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 19:40:59.262322 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 19:40:59.265607 jq[1525]: false Dec 12 19:40:59.266377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 19:40:59.267159 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 19:40:59.276079 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 19:40:59.279710 systemd-networkd[1501]: Enumeration completed Dec 12 19:40:59.280214 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 19:40:59.282618 systemd[1]: Reached target network.target - Network. Dec 12 19:40:59.290400 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 19:40:59.298321 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 19:40:59.305413 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 19:40:59.314974 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache Dec 12 19:40:59.314998 oslogin_cache_refresh[1527]: Refreshing passwd entry cache Dec 12 19:40:59.319106 update_engine[1535]: I20251212 19:40:59.316875 1535 main.cc:92] Flatcar Update Engine starting Dec 12 19:40:59.319525 jq[1536]: true Dec 12 19:40:59.357877 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting Dec 12 19:40:59.357877 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 19:40:59.357877 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache Dec 12 19:40:59.352484 oslogin_cache_refresh[1527]: Failure getting users, quitting Dec 12 19:40:59.352521 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 19:40:59.352641 oslogin_cache_refresh[1527]: Refreshing group entry cache Dec 12 19:40:59.375225 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting Dec 12 19:40:59.375225 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 19:40:59.373548 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 19:40:59.370621 oslogin_cache_refresh[1527]: Failure getting groups, quitting Dec 12 19:40:59.370644 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 19:40:59.375640 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 19:40:59.380662 dbus-daemon[1523]: [system] SELinux support is enabled Dec 12 19:40:59.380921 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 12 19:40:59.386063 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 19:40:59.386139 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 19:40:59.386977 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 19:40:59.387003 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 19:40:59.400920 systemd[1]: Started update-engine.service - Update Engine. Dec 12 19:40:59.402053 update_engine[1535]: I20251212 19:40:59.401962 1535 update_check_scheduler.cc:74] Next update check in 5m41s Dec 12 19:40:59.410435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 19:40:59.411814 jq[1554]: true Dec 12 19:40:59.419167 tar[1546]: linux-amd64/LICENSE Dec 12 19:40:59.419167 tar[1546]: linux-amd64/helm Dec 12 19:40:59.424449 extend-filesystems[1526]: Found /dev/vda6 Dec 12 19:40:59.426063 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 19:40:59.437745 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 19:40:59.438128 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 19:40:59.444736 extend-filesystems[1526]: Found /dev/vda9 Dec 12 19:40:59.453933 extend-filesystems[1526]: Checking size of /dev/vda9 Dec 12 19:40:59.457672 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 19:40:59.503222 extend-filesystems[1526]: Resized partition /dev/vda9 Dec 12 19:40:59.517140 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 19:40:59.552663 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 12 19:40:59.724140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 19:40:59.730762 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Dec 12 19:40:59.744186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 19:40:59.748436 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 19:40:59.753288 systemd[1]: Starting sshkeys.service... Dec 12 19:40:59.802296 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 19:40:59.802313 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 19:40:59.806020 systemd-networkd[1501]: eth0: Link UP Dec 12 19:40:59.806750 systemd-networkd[1501]: eth0: Gained carrier Dec 12 19:40:59.806772 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 19:40:59.830125 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 19:40:59.832915 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 12 19:40:59.845445 dbus-daemon[1523]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1501 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 19:40:59.846172 systemd-networkd[1501]: eth0: DHCPv4 address 10.244.20.246/30, gateway 10.244.20.245 acquired from 10.244.20.245 Dec 12 19:40:59.851635 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 19:40:59.856264 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Dec 12 19:40:59.874926 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:40:59.880956 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 19:40:59.894485 systemd-logind[1534]: New seat seat0. Dec 12 19:40:59.908145 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 19:40:59.948538 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 12 19:40:59.984955 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 19:40:59.984955 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 12 19:40:59.984955 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 12 19:40:59.984484 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 19:40:59.996741 extend-filesystems[1526]: Resized filesystem in /dev/vda9 Dec 12 19:40:59.986066 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 19:41:00.101216 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 19:41:00.110646 containerd[1566]: time="2025-12-12T19:41:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 19:41:00.116726 containerd[1566]: time="2025-12-12T19:41:00.116655578Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 19:41:00.121131 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 19:41:00.160587 containerd[1566]: time="2025-12-12T19:41:00.160515900Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.071µs" Dec 12 19:41:00.160587 containerd[1566]: time="2025-12-12T19:41:00.160573184Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 19:41:00.160770 containerd[1566]: time="2025-12-12T19:41:00.160605028Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 19:41:00.160949 containerd[1566]: time="2025-12-12T19:41:00.160909718Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 19:41:00.161010 containerd[1566]: time="2025-12-12T19:41:00.160954099Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 19:41:00.161083 containerd[1566]: time="2025-12-12T19:41:00.161008546Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 19:41:00.162592 containerd[1566]: time="2025-12-12T19:41:00.161159062Z" level=info msg="skip loading plugin" error="no scratch file generator: 
skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 19:41:00.162592 containerd[1566]: time="2025-12-12T19:41:00.162587688Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 19:41:00.162966 containerd[1566]: time="2025-12-12T19:41:00.162931500Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 19:41:00.162966 containerd[1566]: time="2025-12-12T19:41:00.162960513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 19:41:00.163067 containerd[1566]: time="2025-12-12T19:41:00.162979685Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 19:41:00.163067 containerd[1566]: time="2025-12-12T19:41:00.162994700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 19:41:00.169241 containerd[1566]: time="2025-12-12T19:41:00.169207569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 19:41:00.169735 containerd[1566]: time="2025-12-12T19:41:00.169674144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 19:41:00.169820 containerd[1566]: time="2025-12-12T19:41:00.169757753Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 19:41:00.169820 containerd[1566]: time="2025-12-12T19:41:00.169784462Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 19:41:00.170120 containerd[1566]: time="2025-12-12T19:41:00.169846901Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 19:41:00.170608 containerd[1566]: time="2025-12-12T19:41:00.170208610Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 19:41:00.170608 containerd[1566]: time="2025-12-12T19:41:00.170316995Z" level=info msg="metadata content store policy set" policy=shared Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184502566Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184587629Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184614159Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184634342Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184671445Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184730352Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184758771Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184800798Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184840731Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184862358Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184878949Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 19:41:00.184953 containerd[1566]: time="2025-12-12T19:41:00.184901107Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185161069Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185210278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185235651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185256192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185273926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185292187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185310389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185327707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185361206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185381848Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 19:41:00.185520 containerd[1566]: time="2025-12-12T19:41:00.185399782Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 19:41:00.185955 containerd[1566]: time="2025-12-12T19:41:00.185562131Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 19:41:00.185955 containerd[1566]: time="2025-12-12T19:41:00.185597507Z" level=info msg="Start snapshots syncer" Dec 12 19:41:00.185955 containerd[1566]: time="2025-12-12T19:41:00.185642156Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 19:41:00.190858 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 19:41:00.192760 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 19:41:00.195050 containerd[1566]: time="2025-12-12T19:41:00.186054185Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 19:41:00.195341 containerd[1566]: time="2025-12-12T19:41:00.195105248Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 19:41:00.200905 containerd[1566]: time="2025-12-12T19:41:00.200782717Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 19:41:00.201010 containerd[1566]: time="2025-12-12T19:41:00.200981577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 19:41:00.201060 containerd[1566]: time="2025-12-12T19:41:00.201021812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 19:41:00.201060 containerd[1566]: time="2025-12-12T19:41:00.201042461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 19:41:00.205213 containerd[1566]: time="2025-12-12T19:41:00.201059701Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 19:41:00.201463 dbus-daemon[1523]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1609 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210550476Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210602719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210636014Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210726757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210752420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210772339Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210862572Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210893703Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210908804Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210925379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210939459Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210955124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.210981699Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 19:41:00.211044 containerd[1566]: time="2025-12-12T19:41:00.211028578Z" level=info msg="runtime interface created" Dec 12 19:41:00.210641 systemd[1]: Starting polkit.service - Authorization Manager... 
Dec 12 19:41:00.212414 containerd[1566]: time="2025-12-12T19:41:00.211040397Z" level=info msg="created NRI interface" Dec 12 19:41:00.212414 containerd[1566]: time="2025-12-12T19:41:00.211055178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 19:41:00.212414 containerd[1566]: time="2025-12-12T19:41:00.211080781Z" level=info msg="Connect containerd service" Dec 12 19:41:00.212414 containerd[1566]: time="2025-12-12T19:41:00.211133164Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 19:41:00.221435 containerd[1566]: time="2025-12-12T19:41:00.221389175Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 19:41:00.271131 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 12 19:41:00.274363 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 19:41:00.341117 kernel: ACPI: button: Power Button [PWRF] Dec 12 19:41:00.348039 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 19:41:00.405126 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 19:41:00.412453 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 19:41:00.418606 systemd[1]: Started sshd@0-10.244.20.246:22-147.75.109.163:40786.service - OpenSSH per-connection server daemon (147.75.109.163:40786). Dec 12 19:41:00.497082 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 19:41:00.497677 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 19:41:00.504462 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 19:41:00.533603 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 19:41:00.538676 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 19:41:00.542806 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 19:41:00.545527 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551238804Z" level=info msg="Start subscribing containerd event" Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551339016Z" level=info msg="Start recovering state" Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551518547Z" level=info msg="Start event monitor" Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551542971Z" level=info msg="Start cni network conf syncer for default" Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551560914Z" level=info msg="Start streaming server" Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551585060Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551602089Z" level=info msg="runtime interface starting up..." Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551616395Z" level=info msg="starting plugins..." Dec 12 19:41:00.551847 containerd[1566]: time="2025-12-12T19:41:00.551661079Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 19:41:00.560454 containerd[1566]: time="2025-12-12T19:41:00.558392368Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 12 19:41:00.560454 containerd[1566]: time="2025-12-12T19:41:00.558580724Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 19:41:00.559484 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 19:41:00.564121 containerd[1566]: time="2025-12-12T19:41:00.564056319Z" level=info msg="containerd successfully booted in 0.454930s" Dec 12 19:41:00.578863 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 19:41:00.579416 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 19:41:00.629488 polkitd[1621]: Started polkitd version 126 Dec 12 19:41:00.653025 polkitd[1621]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 19:41:00.666194 polkitd[1621]: Loading rules from directory /run/polkit-1/rules.d Dec 12 19:41:00.666306 polkitd[1621]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 19:41:00.666656 polkitd[1621]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 19:41:00.666710 polkitd[1621]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 19:41:00.666771 polkitd[1621]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 19:41:00.676417 polkitd[1621]: Finished loading, compiling and executing 2 rules Dec 12 19:41:00.678826 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 19:41:00.681875 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 19:41:00.682515 polkitd[1621]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 19:41:00.723504 systemd-hostnamed[1609]: Hostname set to (static) Dec 12 19:41:00.786844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 19:41:00.815926 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 19:41:00.906605 tar[1546]: linux-amd64/README.md Dec 12 19:41:00.932185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 19:41:00.991808 systemd-logind[1534]: Watching system buttons on /dev/input/event3 (Power Button) Dec 12 19:41:01.294554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:41:01.480591 sshd[1643]: Accepted publickey for core from 147.75.109.163 port 40786 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:01.483504 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:01.506123 systemd-logind[1534]: New session 1 of user core. Dec 12 19:41:01.509461 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 19:41:01.516158 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 19:41:01.553992 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 19:41:01.558973 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 19:41:01.580451 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 19:41:01.584891 systemd-logind[1534]: New session c1 of user core. 
Dec 12 19:41:01.742525 systemd-networkd[1501]: eth0: Gained IPv6LL Dec 12 19:41:01.746362 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Dec 12 19:41:01.750893 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 19:41:01.753828 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 19:41:01.762442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:41:01.768195 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 19:41:01.796076 systemd[1685]: Queued start job for default target default.target. Dec 12 19:41:01.798281 systemd[1685]: Created slice app.slice - User Application Slice. Dec 12 19:41:01.798325 systemd[1685]: Reached target paths.target - Paths. Dec 12 19:41:01.798402 systemd[1685]: Reached target timers.target - Timers. Dec 12 19:41:01.800981 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 19:41:01.822555 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 19:41:01.823626 systemd[1685]: Reached target sockets.target - Sockets. Dec 12 19:41:01.823707 systemd[1685]: Reached target basic.target - Basic System. Dec 12 19:41:01.823784 systemd[1685]: Reached target default.target - Main User Target. Dec 12 19:41:01.823849 systemd[1685]: Startup finished in 229ms. Dec 12 19:41:01.825205 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 19:41:01.833447 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 19:41:01.849653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 19:41:02.194632 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:41:02.194743 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:41:02.485903 systemd[1]: Started sshd@1-10.244.20.246:22-147.75.109.163:40790.service - OpenSSH per-connection server daemon (147.75.109.163:40790). Dec 12 19:41:02.844736 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Dec 12 19:41:02.847237 systemd-networkd[1501]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:53d:24:19ff:fef4:14f6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:53d:24:19ff:fef4:14f6/64 assigned by NDisc. Dec 12 19:41:02.847249 systemd-networkd[1501]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 12 19:41:02.895350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:41:02.907651 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 19:41:03.409127 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 40790 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:03.408885 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:03.422717 systemd-logind[1534]: New session 2 of user core. Dec 12 19:41:03.428981 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 12 19:41:03.566178 kubelet[1719]: E1212 19:41:03.566073 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 19:41:03.569783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 19:41:03.570505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 19:41:03.571711 systemd[1]: kubelet.service: Consumed 1.126s CPU time, 266M memory peak. Dec 12 19:41:04.031174 sshd[1724]: Connection closed by 147.75.109.163 port 40790 Dec 12 19:41:04.032253 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Dec 12 19:41:04.037562 systemd[1]: sshd@1-10.244.20.246:22-147.75.109.163:40790.service: Deactivated successfully. Dec 12 19:41:04.040418 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 19:41:04.043584 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Dec 12 19:41:04.045129 systemd-logind[1534]: Removed session 2. Dec 12 19:41:04.188348 systemd[1]: Started sshd@2-10.244.20.246:22-147.75.109.163:34412.service - OpenSSH per-connection server daemon (147.75.109.163:34412). Dec 12 19:41:04.216110 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:41:04.227134 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:41:04.751305 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Dec 12 19:41:05.124754 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 34412 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:05.126575 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:05.133290 systemd-logind[1534]: New session 3 of user core. Dec 12 19:41:05.142466 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 19:41:05.639592 login[1661]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 19:41:05.658262 systemd-logind[1534]: New session 4 of user core. Dec 12 19:41:05.659843 login[1660]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 19:41:05.670962 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 19:41:05.681146 systemd-logind[1534]: New session 5 of user core. Dec 12 19:41:05.692629 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 19:41:05.780306 sshd[1736]: Connection closed by 147.75.109.163 port 34412 Dec 12 19:41:05.781350 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Dec 12 19:41:05.786691 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Dec 12 19:41:05.787301 systemd[1]: sshd@2-10.244.20.246:22-147.75.109.163:34412.service: Deactivated successfully. Dec 12 19:41:05.789951 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 19:41:05.793018 systemd-logind[1534]: Removed session 3. 
Dec 12 19:41:08.236135 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:41:08.250116 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:41:08.250270 coreos-metadata[1522]: Dec 12 19:41:08.248 WARN failed to locate config-drive, using the metadata service API instead Dec 12 19:41:08.262626 coreos-metadata[1607]: Dec 12 19:41:08.262 WARN failed to locate config-drive, using the metadata service API instead Dec 12 19:41:08.277415 coreos-metadata[1522]: Dec 12 19:41:08.276 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 19:41:08.286934 coreos-metadata[1522]: Dec 12 19:41:08.286 INFO Fetch failed with 404: resource not found Dec 12 19:41:08.287329 coreos-metadata[1522]: Dec 12 19:41:08.287 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 19:41:08.287501 coreos-metadata[1607]: Dec 12 19:41:08.287 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 19:41:08.287842 coreos-metadata[1522]: Dec 12 19:41:08.287 INFO Fetch successful Dec 12 19:41:08.288200 coreos-metadata[1522]: Dec 12 19:41:08.288 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 19:41:08.302674 coreos-metadata[1522]: Dec 12 19:41:08.302 INFO Fetch successful Dec 12 19:41:08.303244 coreos-metadata[1522]: Dec 12 19:41:08.303 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 19:41:08.313352 coreos-metadata[1607]: Dec 12 19:41:08.313 INFO Fetch successful Dec 12 19:41:08.313549 coreos-metadata[1607]: Dec 12 19:41:08.313 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 19:41:08.321222 coreos-metadata[1522]: Dec 12 19:41:08.321 INFO Fetch successful Dec 12 19:41:08.321592 coreos-metadata[1522]: Dec 12 19:41:08.321 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 19:41:08.335836 coreos-metadata[1522]: Dec 12 19:41:08.335 INFO Fetch successful Dec 12 19:41:08.336349 coreos-metadata[1522]: Dec 12 19:41:08.336 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 19:41:08.347023 coreos-metadata[1607]: Dec 12 19:41:08.346 INFO Fetch successful Dec 12 19:41:08.351856 unknown[1607]: wrote ssh authorized keys file for user: core Dec 12 19:41:08.367457 coreos-metadata[1522]: Dec 12 19:41:08.356 INFO Fetch successful Dec 12 19:41:08.393580 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 19:41:08.394893 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 19:41:08.401937 update-ssh-keys[1773]: Updated "/home/core/.ssh/authorized_keys" Dec 12 19:41:08.403535 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 19:41:08.405970 systemd[1]: Finished sshkeys.service. Dec 12 19:41:08.410320 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 19:41:08.410679 systemd[1]: Startup finished in 3.589s (kernel) + 14.984s (initrd) + 12.826s (userspace) = 31.399s. Dec 12 19:41:13.820908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 19:41:13.823941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:41:14.046645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 19:41:14.063933 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 19:41:14.138443 kubelet[1787]: E1212 19:41:14.138258 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 19:41:14.143787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 19:41:14.144050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 19:41:14.145032 systemd[1]: kubelet.service: Consumed 253ms CPU time, 109.5M memory peak. Dec 12 19:41:15.936987 systemd[1]: Started sshd@3-10.244.20.246:22-147.75.109.163:44178.service - OpenSSH per-connection server daemon (147.75.109.163:44178). Dec 12 19:41:16.897023 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 44178 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:16.897854 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:16.906141 systemd-logind[1534]: New session 6 of user core. Dec 12 19:41:16.916367 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 19:41:17.517200 sshd[1799]: Connection closed by 147.75.109.163 port 44178 Dec 12 19:41:17.518272 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Dec 12 19:41:17.524277 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Dec 12 19:41:17.525035 systemd[1]: sshd@3-10.244.20.246:22-147.75.109.163:44178.service: Deactivated successfully. Dec 12 19:41:17.527896 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 19:41:17.531295 systemd-logind[1534]: Removed session 6. Dec 12 19:41:17.675688 systemd[1]: Started sshd@4-10.244.20.246:22-147.75.109.163:44184.service - OpenSSH per-connection server daemon (147.75.109.163:44184). Dec 12 19:41:18.605549 sshd[1805]: Accepted publickey for core from 147.75.109.163 port 44184 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:18.607445 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:18.615162 systemd-logind[1534]: New session 7 of user core. Dec 12 19:41:18.626431 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 19:41:19.229138 sshd[1808]: Connection closed by 147.75.109.163 port 44184 Dec 12 19:41:19.228524 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Dec 12 19:41:19.234431 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Dec 12 19:41:19.234681 systemd[1]: sshd@4-10.244.20.246:22-147.75.109.163:44184.service: Deactivated successfully. Dec 12 19:41:19.237313 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 19:41:19.239983 systemd-logind[1534]: Removed session 7. Dec 12 19:41:19.383937 systemd[1]: Started sshd@5-10.244.20.246:22-147.75.109.163:44192.service - OpenSSH per-connection server daemon (147.75.109.163:44192). Dec 12 19:41:19.769714 systemd[1]: Started sshd@6-10.244.20.246:22-157.245.76.79:35290.service - OpenSSH per-connection server daemon (157.245.76.79:35290). 
Dec 12 19:41:19.885057 sshd[1818]: Invalid user webmaster from 157.245.76.79 port 35290 Dec 12 19:41:19.900208 sshd[1818]: Connection closed by invalid user webmaster 157.245.76.79 port 35290 [preauth] Dec 12 19:41:19.902977 systemd[1]: sshd@6-10.244.20.246:22-157.245.76.79:35290.service: Deactivated successfully. Dec 12 19:41:20.293937 sshd[1814]: Accepted publickey for core from 147.75.109.163 port 44192 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:20.295808 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:20.303523 systemd-logind[1534]: New session 8 of user core. Dec 12 19:41:20.311331 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 19:41:20.920124 sshd[1823]: Connection closed by 147.75.109.163 port 44192 Dec 12 19:41:20.918916 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Dec 12 19:41:20.925332 systemd[1]: sshd@5-10.244.20.246:22-147.75.109.163:44192.service: Deactivated successfully. Dec 12 19:41:20.927811 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 19:41:20.929143 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Dec 12 19:41:20.931170 systemd-logind[1534]: Removed session 8. Dec 12 19:41:21.082445 systemd[1]: Started sshd@7-10.244.20.246:22-147.75.109.163:44200.service - OpenSSH per-connection server daemon (147.75.109.163:44200). Dec 12 19:41:22.021238 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 44200 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:22.023177 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:22.032169 systemd-logind[1534]: New session 9 of user core. Dec 12 19:41:22.043384 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 19:41:22.520634 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 19:41:22.521073 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 19:41:22.542234 sudo[1833]: pam_unix(sudo:session): session closed for user root Dec 12 19:41:22.688802 sshd[1832]: Connection closed by 147.75.109.163 port 44200 Dec 12 19:41:22.690152 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Dec 12 19:41:22.696578 systemd[1]: sshd@7-10.244.20.246:22-147.75.109.163:44200.service: Deactivated successfully. Dec 12 19:41:22.699286 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 19:41:22.700521 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Dec 12 19:41:22.702603 systemd-logind[1534]: Removed session 9. Dec 12 19:41:22.853437 systemd[1]: Started sshd@8-10.244.20.246:22-147.75.109.163:56764.service - OpenSSH per-connection server daemon (147.75.109.163:56764). Dec 12 19:41:23.782829 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 56764 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo Dec 12 19:41:23.784789 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:41:23.791830 systemd-logind[1534]: New session 10 of user core. Dec 12 19:41:23.800360 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 19:41:24.266863 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 19:41:24.268252 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 19:41:24.270134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 12 19:41:24.272468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 19:41:24.279851 sudo[1844]: pam_unix(sudo:session): session closed for user root
Dec 12 19:41:24.290052 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 19:41:24.290497 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 19:41:24.308183 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 19:41:24.366960 augenrules[1869]: No rules
Dec 12 19:41:24.370144 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 19:41:24.371023 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 19:41:24.373355 sudo[1843]: pam_unix(sudo:session): session closed for user root
Dec 12 19:41:24.473407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:41:24.487579 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 19:41:24.520161 sshd[1842]: Connection closed by 147.75.109.163 port 56764
Dec 12 19:41:24.519843 sshd-session[1839]: pam_unix(sshd:session): session closed for user core
Dec 12 19:41:24.527437 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit.
Dec 12 19:41:24.528258 systemd[1]: sshd@8-10.244.20.246:22-147.75.109.163:56764.service: Deactivated successfully.
Dec 12 19:41:24.532521 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 19:41:24.537771 systemd-logind[1534]: Removed session 10.
Dec 12 19:41:24.557748 kubelet[1878]: E1212 19:41:24.557650 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 19:41:24.560960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:41:24.561246 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:41:24.562245 systemd[1]: kubelet.service: Consumed 222ms CPU time, 110.5M memory peak.
Dec 12 19:41:24.689515 systemd[1]: Started sshd@9-10.244.20.246:22-147.75.109.163:56768.service - OpenSSH per-connection server daemon (147.75.109.163:56768).
Dec 12 19:41:25.620701 sshd[1889]: Accepted publickey for core from 147.75.109.163 port 56768 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:41:25.622623 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:41:25.629862 systemd-logind[1534]: New session 11 of user core.
Dec 12 19:41:25.638446 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 19:41:26.103524 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 19:41:26.103968 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 19:41:26.645125 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 19:41:26.661957 (dockerd)[1910]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 19:41:27.042835 dockerd[1910]: time="2025-12-12T19:41:27.041842768Z" level=info msg="Starting up"
Dec 12 19:41:27.049862 dockerd[1910]: time="2025-12-12T19:41:27.049824043Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 19:41:27.070509 dockerd[1910]: time="2025-12-12T19:41:27.070384394Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 19:41:27.095642 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3038113444-merged.mount: Deactivated successfully.
Dec 12 19:41:27.133772 dockerd[1910]: time="2025-12-12T19:41:27.133410417Z" level=info msg="Loading containers: start."
Dec 12 19:41:27.153120 kernel: Initializing XFRM netlink socket
Dec 12 19:41:27.438142 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection.
Dec 12 19:41:27.513961 systemd-networkd[1501]: docker0: Link UP
Dec 12 19:41:27.518173 dockerd[1910]: time="2025-12-12T19:41:27.518120018Z" level=info msg="Loading containers: done."
Dec 12 19:41:27.539970 dockerd[1910]: time="2025-12-12T19:41:27.539871068Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 19:41:27.540212 dockerd[1910]: time="2025-12-12T19:41:27.539997692Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 19:41:27.540212 dockerd[1910]: time="2025-12-12T19:41:27.540168570Z" level=info msg="Initializing buildkit"
Dec 12 19:41:27.570235 dockerd[1910]: time="2025-12-12T19:41:27.570158998Z" level=info msg="Completed buildkit initialization"
Dec 12 19:41:27.580732 dockerd[1910]: time="2025-12-12T19:41:27.580680890Z" level=info msg="Daemon has completed initialization"
Dec 12 19:41:27.580732 dockerd[1910]: time="2025-12-12T19:41:27.580799288Z" level=info msg="API listen on /run/docker.sock"
Dec 12 19:41:27.582789 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 19:41:28.090205 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck667313700-merged.mount: Deactivated successfully.
Dec 12 19:41:28.808731 containerd[1566]: time="2025-12-12T19:41:28.808641933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 12 19:41:28.994285 systemd-timesyncd[1471]: Contacted time server [2a01:7e00::f03c:93ff:fe0e:ba3]:123 (2.flatcar.pool.ntp.org).
Dec 12 19:41:28.994401 systemd-timesyncd[1471]: Initial clock synchronization to Fri 2025-12-12 19:41:28.748330 UTC.
Dec 12 19:41:29.766774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171204039.mount: Deactivated successfully.
Dec 12 19:41:32.877787 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 12 19:41:33.588711 containerd[1566]: time="2025-12-12T19:41:33.588638534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:33.589998 containerd[1566]: time="2025-12-12T19:41:33.589964107Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072191"
Dec 12 19:41:33.591125 containerd[1566]: time="2025-12-12T19:41:33.590583254Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:33.594117 containerd[1566]: time="2025-12-12T19:41:33.593794775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:33.596016 containerd[1566]: time="2025-12-12T19:41:33.595336360Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 4.786584148s"
Dec 12 19:41:33.596016 containerd[1566]: time="2025-12-12T19:41:33.595393617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 12 19:41:33.597625 containerd[1566]: time="2025-12-12T19:41:33.597599120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 12 19:41:34.637488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 12 19:41:34.641323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 19:41:34.875823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:41:34.892400 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 19:41:34.959432 kubelet[2193]: E1212 19:41:34.959340 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 19:41:34.961763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:41:34.962015 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:41:34.963047 systemd[1]: kubelet.service: Consumed 264ms CPU time, 107.6M memory peak.
Dec 12 19:41:36.201237 containerd[1566]: time="2025-12-12T19:41:36.201140704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:36.203700 containerd[1566]: time="2025-12-12T19:41:36.203661933Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992018"
Dec 12 19:41:36.205169 containerd[1566]: time="2025-12-12T19:41:36.205108669Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:36.207851 containerd[1566]: time="2025-12-12T19:41:36.207791732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:36.211499 containerd[1566]: time="2025-12-12T19:41:36.211457978Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.613715013s"
Dec 12 19:41:36.212746 containerd[1566]: time="2025-12-12T19:41:36.211613959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 12 19:41:36.213759 containerd[1566]: time="2025-12-12T19:41:36.213726062Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 12 19:41:38.162883 containerd[1566]: time="2025-12-12T19:41:38.162807015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:38.164225 containerd[1566]: time="2025-12-12T19:41:38.164186728Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404256"
Dec 12 19:41:38.165653 containerd[1566]: time="2025-12-12T19:41:38.165575341Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:38.169125 containerd[1566]: time="2025-12-12T19:41:38.168837274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:38.171269 containerd[1566]: time="2025-12-12T19:41:38.171223116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.957455096s"
Dec 12 19:41:38.171344 containerd[1566]: time="2025-12-12T19:41:38.171270309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 12 19:41:38.172290 containerd[1566]: time="2025-12-12T19:41:38.172031730Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 12 19:41:39.727909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299742249.mount: Deactivated successfully.
Dec 12 19:41:40.555449 containerd[1566]: time="2025-12-12T19:41:40.555378267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:40.557160 containerd[1566]: time="2025-12-12T19:41:40.557117571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161431"
Dec 12 19:41:40.558352 containerd[1566]: time="2025-12-12T19:41:40.558302086Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:40.561454 containerd[1566]: time="2025-12-12T19:41:40.561411105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:40.563209 containerd[1566]: time="2025-12-12T19:41:40.563160454Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.391085284s"
Dec 12 19:41:40.563209 containerd[1566]: time="2025-12-12T19:41:40.563209583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 12 19:41:40.564001 containerd[1566]: time="2025-12-12T19:41:40.563959369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 12 19:41:41.254614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22922712.mount: Deactivated successfully.
Dec 12 19:41:42.567115 containerd[1566]: time="2025-12-12T19:41:42.565863427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:42.567115 containerd[1566]: time="2025-12-12T19:41:42.566750034Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Dec 12 19:41:42.567831 containerd[1566]: time="2025-12-12T19:41:42.567796161Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:42.572377 containerd[1566]: time="2025-12-12T19:41:42.572324004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:42.574882 containerd[1566]: time="2025-12-12T19:41:42.574842229Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.010841054s"
Dec 12 19:41:42.574882 containerd[1566]: time="2025-12-12T19:41:42.574881540Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 12 19:41:42.575729 containerd[1566]: time="2025-12-12T19:41:42.575699039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 19:41:43.227540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446626771.mount: Deactivated successfully.
Dec 12 19:41:43.235039 containerd[1566]: time="2025-12-12T19:41:43.234979241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 19:41:43.237076 containerd[1566]: time="2025-12-12T19:41:43.236998610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Dec 12 19:41:43.237875 containerd[1566]: time="2025-12-12T19:41:43.237794643Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 19:41:43.241415 containerd[1566]: time="2025-12-12T19:41:43.241355757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 19:41:43.243355 containerd[1566]: time="2025-12-12T19:41:43.243302986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.453862ms"
Dec 12 19:41:43.243355 containerd[1566]: time="2025-12-12T19:41:43.243351842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 12 19:41:43.243991 containerd[1566]: time="2025-12-12T19:41:43.243949610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 12 19:41:43.968867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351326994.mount: Deactivated successfully.
Dec 12 19:41:44.634347 systemd[1]: Started sshd@10-10.244.20.246:22-157.245.76.79:49502.service - OpenSSH per-connection server daemon (157.245.76.79:49502).
Dec 12 19:41:44.753919 sshd[2322]: Invalid user webmaster from 157.245.76.79 port 49502
Dec 12 19:41:44.771241 sshd[2322]: Connection closed by invalid user webmaster 157.245.76.79 port 49502 [preauth]
Dec 12 19:41:44.773577 systemd[1]: sshd@10-10.244.20.246:22-157.245.76.79:49502.service: Deactivated successfully.
Dec 12 19:41:45.016194 update_engine[1535]: I20251212 19:41:45.015906 1535 update_attempter.cc:509] Updating boot flags...
Dec 12 19:41:45.033355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 12 19:41:45.039382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 19:41:45.604984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:41:45.616900 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 19:41:45.733239 kubelet[2351]: E1212 19:41:45.733131 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 19:41:45.737004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:41:45.737668 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:41:45.738612 systemd[1]: kubelet.service: Consumed 225ms CPU time, 108.6M memory peak.
Dec 12 19:41:48.206528 containerd[1566]: time="2025-12-12T19:41:48.206452041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:48.207964 containerd[1566]: time="2025-12-12T19:41:48.207865989Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Dec 12 19:41:48.209130 containerd[1566]: time="2025-12-12T19:41:48.208983433Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:48.212714 containerd[1566]: time="2025-12-12T19:41:48.212677608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:41:48.214870 containerd[1566]: time="2025-12-12T19:41:48.214432879Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.970434922s"
Dec 12 19:41:48.214870 containerd[1566]: time="2025-12-12T19:41:48.214476279Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Dec 12 19:41:51.986470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:41:51.986929 systemd[1]: kubelet.service: Consumed 225ms CPU time, 108.6M memory peak.
Dec 12 19:41:51.990503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 19:41:52.036328 systemd[1]: Reload requested from client PID 2390 ('systemctl') (unit session-11.scope)...
Dec 12 19:41:52.036384 systemd[1]: Reloading...
Dec 12 19:41:52.216150 zram_generator::config[2435]: No configuration found.
Dec 12 19:41:52.557355 systemd[1]: Reloading finished in 519 ms.
Dec 12 19:41:52.621732 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 19:41:52.621874 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 19:41:52.622346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:41:52.622424 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.1M memory peak.
Dec 12 19:41:52.624698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 19:41:52.869660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:41:52.885979 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 19:41:53.080774 kubelet[2502]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 19:41:53.080774 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 19:41:53.080774 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 19:41:53.081476 kubelet[2502]: I1212 19:41:53.080778 2502 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 19:41:53.727786 kubelet[2502]: I1212 19:41:53.727689 2502 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 19:41:53.727786 kubelet[2502]: I1212 19:41:53.727740 2502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 19:41:53.728237 kubelet[2502]: I1212 19:41:53.728181 2502 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 19:41:53.765166 kubelet[2502]: E1212 19:41:53.765056 2502 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.20.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError"
Dec 12 19:41:53.767942 kubelet[2502]: I1212 19:41:53.767831 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 19:41:53.780211 kubelet[2502]: I1212 19:41:53.780172 2502 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 19:41:53.790226 kubelet[2502]: I1212 19:41:53.790184 2502 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 19:41:53.792543 kubelet[2502]: I1212 19:41:53.792493 2502 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 19:41:53.792975 kubelet[2502]: I1212 19:41:53.792547 2502 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-tupcq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 19:41:53.795408 kubelet[2502]: I1212 19:41:53.795365 2502 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 19:41:53.795408 kubelet[2502]: I1212 19:41:53.795400 2502 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 19:41:53.796885 kubelet[2502]: I1212 19:41:53.796852 2502 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 19:41:53.803761 kubelet[2502]: I1212 19:41:53.803279 2502 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 19:41:53.803761 kubelet[2502]: I1212 19:41:53.803339 2502 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 19:41:53.803761 kubelet[2502]: I1212 19:41:53.803397 2502 kubelet.go:352] "Adding apiserver pod source"
Dec 12 19:41:53.803761 kubelet[2502]: I1212 19:41:53.803431 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 19:41:53.806053 kubelet[2502]: W1212 19:41:53.805973 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.20.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tupcq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused
Dec 12 19:41:53.806276 kubelet[2502]: E1212 19:41:53.806244 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.20.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tupcq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError"
Dec 12 19:41:53.807106 kubelet[2502]: W1212 19:41:53.806748 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.20.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused
Dec 12 19:41:53.807313 kubelet[2502]: E1212 19:41:53.807280 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.20.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError"
Dec 12 19:41:53.808853 kubelet[2502]: I1212 19:41:53.808825 2502 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 19:41:53.812369 kubelet[2502]: I1212 19:41:53.812343 2502 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 19:41:53.812593 kubelet[2502]: W1212 19:41:53.812571 2502 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 19:41:53.817798 kubelet[2502]: I1212 19:41:53.817767 2502 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 19:41:53.817877 kubelet[2502]: I1212 19:41:53.817842 2502 server.go:1287] "Started kubelet"
Dec 12 19:41:53.818074 kubelet[2502]: I1212 19:41:53.818024 2502 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 19:41:53.820994 kubelet[2502]: I1212 19:41:53.820969 2502 server.go:479] "Adding debug handlers to kubelet server"
Dec 12 19:41:53.824488 kubelet[2502]: I1212 19:41:53.823920 2502 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 19:41:53.824488 kubelet[2502]: I1212 19:41:53.824401 2502 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 19:41:53.832589 kubelet[2502]: E1212 19:41:53.825854 2502 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.20.246:6443/api/v1/namespaces/default/events\": dial tcp 10.244.20.246:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-tupcq.gb1.brightbox.com.18808f357cb95658 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-tupcq.gb1.brightbox.com,UID:srv-tupcq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-tupcq.gb1.brightbox.com,},FirstTimestamp:2025-12-12 19:41:53.817794136 +0000 UTC m=+0.800851586,LastTimestamp:2025-12-12 19:41:53.817794136 +0000 UTC m=+0.800851586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-tupcq.gb1.brightbox.com,}"
Dec 12 19:41:53.837660 kubelet[2502]: I1212 19:41:53.837600 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 19:41:53.839511 kubelet[2502]: I1212 19:41:53.839460 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 19:41:53.840047 kubelet[2502]: I1212 19:41:53.839972 2502 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 19:41:53.840408 kubelet[2502]: E1212 19:41:53.840357 2502 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-tupcq.gb1.brightbox.com\" not found"
Dec 12 19:41:53.840911 kubelet[2502]: I1212 19:41:53.840883 2502 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 19:41:53.841188 kubelet[2502]: I1212 19:41:53.841132 2502 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 19:41:53.844586 kubelet[2502]: E1212 19:41:53.844554 2502 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 19:41:53.844915 kubelet[2502]: I1212 19:41:53.844884 2502 factory.go:221] Registration of the systemd container factory successfully
Dec 12 19:41:53.845081 kubelet[2502]: E1212 19:41:53.844887 2502 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.20.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tupcq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.20.246:6443: connect: connection refused" interval="200ms"
Dec 12 19:41:53.845206 kubelet[2502]: I1212 19:41:53.845029 2502 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 19:41:53.847834 kubelet[2502]: W1212 19:41:53.847366 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.20.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused
Dec 12 19:41:53.847834 kubelet[2502]: E1212 19:41:53.847438 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.20.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError"
Dec 12 19:41:53.850199 kubelet[2502]: I1212 19:41:53.850157 2502 factory.go:221] Registration of the containerd container factory successfully
Dec 12 19:41:53.875114 kubelet[2502]: I1212 19:41:53.872397 2502 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 12 19:41:53.875114 kubelet[2502]: I1212 19:41:53.873916 2502 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 12 19:41:53.875114 kubelet[2502]: I1212 19:41:53.873960 2502 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 12 19:41:53.875114 kubelet[2502]: I1212 19:41:53.874002 2502 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 19:41:53.875114 kubelet[2502]: I1212 19:41:53.874014 2502 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 12 19:41:53.875114 kubelet[2502]: E1212 19:41:53.874237 2502 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 19:41:53.884678 kubelet[2502]: I1212 19:41:53.884651 2502 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 19:41:53.884678 kubelet[2502]: I1212 19:41:53.884674 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 19:41:53.884864 kubelet[2502]: I1212 19:41:53.884705 2502 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 19:41:53.885061 kubelet[2502]: W1212 19:41:53.884611 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.20.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused
Dec 12 19:41:53.885547 kubelet[2502]: E1212 19:41:53.885504 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.20.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError"
Dec 12 19:41:53.886678 kubelet[2502]: I1212 19:41:53.886654 2502 policy_none.go:49] "None policy: Start"
Dec 12 19:41:53.886770 kubelet[2502]: I1212 19:41:53.886688 2502 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 19:41:53.886770 kubelet[2502]: I1212 19:41:53.886719 2502 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 19:41:53.896363 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 19:41:53.910449 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 19:41:53.915948 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 12 19:41:53.928143 kubelet[2502]: I1212 19:41:53.928112 2502 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 12 19:41:53.928804 kubelet[2502]: I1212 19:41:53.928784 2502 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 19:41:53.929047 kubelet[2502]: I1212 19:41:53.928956 2502 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 19:41:53.932293 kubelet[2502]: E1212 19:41:53.932265 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 19:41:53.932551 kubelet[2502]: E1212 19:41:53.932515 2502 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-tupcq.gb1.brightbox.com\" not found"
Dec 12 19:41:53.936433 kubelet[2502]: I1212 19:41:53.936411 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 19:41:53.991058 systemd[1]: Created slice kubepods-burstable-podd53eaee480b8cc7fc1913717047ebcb3.slice - libcontainer container kubepods-burstable-podd53eaee480b8cc7fc1913717047ebcb3.slice.
Dec 12 19:41:54.003478 kubelet[2502]: E1212 19:41:54.003413 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.008711 systemd[1]: Created slice kubepods-burstable-podfef7b48adfd3d02e1ad5e903b4d40a2d.slice - libcontainer container kubepods-burstable-podfef7b48adfd3d02e1ad5e903b4d40a2d.slice.
Dec 12 19:41:54.022070 kubelet[2502]: E1212 19:41:54.022026 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.026651 systemd[1]: Created slice kubepods-burstable-pod7632313986331b0607c379693c63e54f.slice - libcontainer container kubepods-burstable-pod7632313986331b0607c379693c63e54f.slice.
Dec 12 19:41:54.029685 kubelet[2502]: E1212 19:41:54.029652 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.034916 kubelet[2502]: I1212 19:41:54.034892 2502 kubelet_node_status.go:75] "Attempting to register node" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.035730 kubelet[2502]: E1212 19:41:54.035700 2502 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.20.246:6443/api/v1/nodes\": dial tcp 10.244.20.246:6443: connect: connection refused" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.045655 kubelet[2502]: E1212 19:41:54.045618 2502 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.20.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tupcq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.20.246:6443: connect: connection refused" interval="400ms"
Dec 12 19:41:54.142140 kubelet[2502]: I1212 19:41:54.142042 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-ca-certs\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143518 kubelet[2502]: I1212 19:41:54.142971 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-k8s-certs\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143518 kubelet[2502]: I1212 19:41:54.143066 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fef7b48adfd3d02e1ad5e903b4d40a2d-kubeconfig\") pod \"kube-scheduler-srv-tupcq.gb1.brightbox.com\" (UID: \"fef7b48adfd3d02e1ad5e903b4d40a2d\") " pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143518 kubelet[2502]: I1212 19:41:54.143169 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7632313986331b0607c379693c63e54f-k8s-certs\") pod \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" (UID: \"7632313986331b0607c379693c63e54f\") " pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143518 kubelet[2502]: I1212 19:41:54.143203 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7632313986331b0607c379693c63e54f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" (UID: \"7632313986331b0607c379693c63e54f\") " pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143518 kubelet[2502]: I1212 19:41:54.143270 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-flexvolume-dir\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143821 kubelet[2502]: I1212 19:41:54.143381 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-kubeconfig\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143821 kubelet[2502]: I1212 19:41:54.143416 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.143821 kubelet[2502]: I1212 19:41:54.143444 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7632313986331b0607c379693c63e54f-ca-certs\") pod \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" (UID: \"7632313986331b0607c379693c63e54f\") " pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.239955 kubelet[2502]: I1212 19:41:54.239897 2502 kubelet_node_status.go:75] "Attempting to register node" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.240885 kubelet[2502]: E1212 19:41:54.240822 2502 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.20.246:6443/api/v1/nodes\": dial tcp 10.244.20.246:6443: connect: connection refused" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.309155 containerd[1566]: time="2025-12-12T19:41:54.308680406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-tupcq.gb1.brightbox.com,Uid:d53eaee480b8cc7fc1913717047ebcb3,Namespace:kube-system,Attempt:0,}"
Dec 12 19:41:54.334296 containerd[1566]: time="2025-12-12T19:41:54.334236201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-tupcq.gb1.brightbox.com,Uid:fef7b48adfd3d02e1ad5e903b4d40a2d,Namespace:kube-system,Attempt:0,}"
Dec 12 19:41:54.346251 containerd[1566]: time="2025-12-12T19:41:54.346192727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-tupcq.gb1.brightbox.com,Uid:7632313986331b0607c379693c63e54f,Namespace:kube-system,Attempt:0,}"
Dec 12 19:41:54.448715 kubelet[2502]: E1212 19:41:54.448296 2502 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.20.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tupcq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.20.246:6443: connect: connection refused" interval="800ms"
Dec 12 19:41:54.518493 containerd[1566]: time="2025-12-12T19:41:54.518394596Z" level=info msg="connecting to shim 9eb1b974114b178326e713170bf95091fa1e9dae5329fea5b26a6a936bfd263a" address="unix:///run/containerd/s/c9d47435fc67ccdbae26f5ab36f441633e96e724f02e9f879f006e2ed9af25e0" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:41:54.524524 containerd[1566]: time="2025-12-12T19:41:54.524467252Z" level=info msg="connecting to shim 1965ba30248c07d002560823ed4f9ac27802ffd30d560bdf0f29adc7ecdd1527" address="unix:///run/containerd/s/0620ce466e5eeac271ef7fe8992cbca84a5d42ced34ff6d9a7244a42524f4977" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:41:54.524915 containerd[1566]: time="2025-12-12T19:41:54.524495360Z" level=info msg="connecting to shim cfbaaad72f4fa0f0786905ba9efd3b2d5db83bf0de0406896e949cc4c79c6624" address="unix:///run/containerd/s/7625c52976504e69391e49742e3da9c017cb62040ceba88190b9a0ec6d454c99" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:41:54.657363 kubelet[2502]: I1212 19:41:54.657311 2502 kubelet_node_status.go:75] "Attempting to register node" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.659036 kubelet[2502]: E1212 19:41:54.658992 2502 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.20.246:6443/api/v1/nodes\": dial tcp 10.244.20.246:6443: connect: connection refused" node="srv-tupcq.gb1.brightbox.com"
Dec 12 19:41:54.667501 systemd[1]: Started cri-containerd-1965ba30248c07d002560823ed4f9ac27802ffd30d560bdf0f29adc7ecdd1527.scope - libcontainer container 1965ba30248c07d002560823ed4f9ac27802ffd30d560bdf0f29adc7ecdd1527.
Dec 12 19:41:54.670619 systemd[1]: Started cri-containerd-9eb1b974114b178326e713170bf95091fa1e9dae5329fea5b26a6a936bfd263a.scope - libcontainer container 9eb1b974114b178326e713170bf95091fa1e9dae5329fea5b26a6a936bfd263a.
Dec 12 19:41:54.673151 systemd[1]: Started cri-containerd-cfbaaad72f4fa0f0786905ba9efd3b2d5db83bf0de0406896e949cc4c79c6624.scope - libcontainer container cfbaaad72f4fa0f0786905ba9efd3b2d5db83bf0de0406896e949cc4c79c6624.
Dec 12 19:41:54.707599 kubelet[2502]: W1212 19:41:54.706818 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.20.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused Dec 12 19:41:54.708117 kubelet[2502]: E1212 19:41:54.708015 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.20.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError" Dec 12 19:41:54.769267 kubelet[2502]: W1212 19:41:54.769185 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.20.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tupcq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused Dec 12 19:41:54.769484 kubelet[2502]: E1212 19:41:54.769277 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.20.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-tupcq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError" Dec 12 19:41:54.836182 containerd[1566]: time="2025-12-12T19:41:54.836121907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-tupcq.gb1.brightbox.com,Uid:7632313986331b0607c379693c63e54f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfbaaad72f4fa0f0786905ba9efd3b2d5db83bf0de0406896e949cc4c79c6624\"" Dec 12 19:41:54.850275 containerd[1566]: time="2025-12-12T19:41:54.850197934Z" level=info msg="CreateContainer within sandbox \"cfbaaad72f4fa0f0786905ba9efd3b2d5db83bf0de0406896e949cc4c79c6624\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 19:41:54.851577 containerd[1566]: time="2025-12-12T19:41:54.851535828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-tupcq.gb1.brightbox.com,Uid:d53eaee480b8cc7fc1913717047ebcb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb1b974114b178326e713170bf95091fa1e9dae5329fea5b26a6a936bfd263a\"" Dec 12 19:41:54.874360 containerd[1566]: time="2025-12-12T19:41:54.874064067Z" level=info msg="CreateContainer within sandbox \"9eb1b974114b178326e713170bf95091fa1e9dae5329fea5b26a6a936bfd263a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 19:41:54.876453 containerd[1566]: time="2025-12-12T19:41:54.876379749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-tupcq.gb1.brightbox.com,Uid:fef7b48adfd3d02e1ad5e903b4d40a2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1965ba30248c07d002560823ed4f9ac27802ffd30d560bdf0f29adc7ecdd1527\"" Dec 12 19:41:54.882416 containerd[1566]: time="2025-12-12T19:41:54.882365023Z" level=info msg="CreateContainer within sandbox \"1965ba30248c07d002560823ed4f9ac27802ffd30d560bdf0f29adc7ecdd1527\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 19:41:54.886339 containerd[1566]: time="2025-12-12T19:41:54.886306031Z" level=info msg="Container 4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:41:54.897022 
containerd[1566]: time="2025-12-12T19:41:54.896963960Z" level=info msg="Container 2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:41:54.898901 containerd[1566]: time="2025-12-12T19:41:54.898839749Z" level=info msg="Container f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:41:54.902735 containerd[1566]: time="2025-12-12T19:41:54.902698921Z" level=info msg="CreateContainer within sandbox \"cfbaaad72f4fa0f0786905ba9efd3b2d5db83bf0de0406896e949cc4c79c6624\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce\"" Dec 12 19:41:54.904051 containerd[1566]: time="2025-12-12T19:41:54.903593202Z" level=info msg="StartContainer for \"4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce\"" Dec 12 19:41:54.909661 containerd[1566]: time="2025-12-12T19:41:54.909057382Z" level=info msg="CreateContainer within sandbox \"1965ba30248c07d002560823ed4f9ac27802ffd30d560bdf0f29adc7ecdd1527\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473\"" Dec 12 19:41:54.910696 containerd[1566]: time="2025-12-12T19:41:54.910665578Z" level=info msg="StartContainer for \"2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473\"" Dec 12 19:41:54.911805 containerd[1566]: time="2025-12-12T19:41:54.911771054Z" level=info msg="CreateContainer within sandbox \"9eb1b974114b178326e713170bf95091fa1e9dae5329fea5b26a6a936bfd263a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57\"" Dec 12 19:41:54.911910 containerd[1566]: time="2025-12-12T19:41:54.911029898Z" level=info msg="connecting to shim 4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce" address="unix:///run/containerd/s/7625c52976504e69391e49742e3da9c017cb62040ceba88190b9a0ec6d454c99" protocol=ttrpc version=3 Dec 12 19:41:54.913328 containerd[1566]: time="2025-12-12T19:41:54.913289842Z" level=info msg="connecting to shim 2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473" address="unix:///run/containerd/s/0620ce466e5eeac271ef7fe8992cbca84a5d42ced34ff6d9a7244a42524f4977" protocol=ttrpc version=3 Dec 12 19:41:54.916137 containerd[1566]: time="2025-12-12T19:41:54.914258609Z" level=info msg="StartContainer for \"f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57\"" Dec 12 19:41:54.917583 containerd[1566]: time="2025-12-12T19:41:54.917494312Z" level=info msg="connecting to shim f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57" address="unix:///run/containerd/s/c9d47435fc67ccdbae26f5ab36f441633e96e724f02e9f879f006e2ed9af25e0" protocol=ttrpc version=3 Dec 12 19:41:54.938899 kubelet[2502]: W1212 19:41:54.938622 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.20.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused Dec 12 19:41:54.938899 kubelet[2502]: E1212 19:41:54.938728 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.20.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.244.20.246:6443: connect: connection refused" logger="UnhandledError" Dec 12 19:41:54.956451 systemd[1]: Started cri-containerd-2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473.scope - libcontainer container 2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473. Dec 12 19:41:54.970379 systemd[1]: Started cri-containerd-4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce.scope - libcontainer container 4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce. Dec 12 19:41:54.987529 systemd[1]: Started cri-containerd-f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57.scope - libcontainer container f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57. Dec 12 19:41:55.029128 kubelet[2502]: W1212 19:41:55.028696 2502 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.20.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.20.246:6443: connect: connection refused Dec 12 19:41:55.030434 kubelet[2502]: E1212 19:41:55.029043 2502 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.20.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.20.246:6443: connect: connection refused" logger="UnhandledError" Dec 12 19:41:55.117804 containerd[1566]: time="2025-12-12T19:41:55.117727054Z" level=info msg="StartContainer for \"4cbf7ac5f693b70668088482581feea7b2cf2fa0963fdd10781a1706bd27c3ce\" returns successfully" Dec 12 19:41:55.135590 containerd[1566]: time="2025-12-12T19:41:55.135508878Z" level=info msg="StartContainer for \"f831310e4fde791f9312f844a05ca90494366e698d2c2cedd82d2069d68a3a57\" returns successfully" Dec 12 19:41:55.146313 containerd[1566]: time="2025-12-12T19:41:55.146260055Z" level=info msg="StartContainer for \"2fd243ce6047bd0f877570d6f124afdf75cd42e00489d499d281610e7ea0f473\" returns successfully" Dec 12 19:41:55.249908 kubelet[2502]: E1212 19:41:55.249710 2502 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.20.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-tupcq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.20.246:6443: connect: connection refused" interval="1.6s" Dec 12 19:41:55.463814 kubelet[2502]: I1212 19:41:55.463773 2502 kubelet_node_status.go:75] "Attempting to register node" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:55.465099 kubelet[2502]: E1212 19:41:55.464721 2502 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.20.246:6443/api/v1/nodes\": dial tcp 10.244.20.246:6443: connect: connection refused" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:55.913765 kubelet[2502]: E1212 19:41:55.913725 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:55.921884 kubelet[2502]: E1212 19:41:55.921844 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:55.927572 kubelet[2502]: E1212 19:41:55.927539 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" 
node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:56.933643 kubelet[2502]: E1212 19:41:56.933602 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:56.934351 kubelet[2502]: E1212 19:41:56.934071 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:56.935599 kubelet[2502]: E1212 19:41:56.935574 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:57.070569 kubelet[2502]: I1212 19:41:57.070525 2502 kubelet_node_status.go:75] "Attempting to register node" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:57.938963 kubelet[2502]: E1212 19:41:57.938912 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:57.941046 kubelet[2502]: E1212 19:41:57.940226 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:57.941046 kubelet[2502]: E1212 19:41:57.940801 2502 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:57.963402 kubelet[2502]: E1212 19:41:57.963345 2502 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-tupcq.gb1.brightbox.com\" not found" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.078029 kubelet[2502]: I1212 19:41:58.077921 2502 kubelet_node_status.go:78] "Successfully registered node" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.141181 kubelet[2502]: I1212 19:41:58.141065 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.154467 kubelet[2502]: E1212 19:41:58.154413 2502 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.154923 kubelet[2502]: I1212 19:41:58.154463 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.163956 kubelet[2502]: E1212 19:41:58.163858 2502 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.165223 kubelet[2502]: I1212 19:41:58.165198 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.167948 kubelet[2502]: E1212 19:41:58.167903 2502 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-tupcq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.809268 kubelet[2502]: I1212 19:41:58.809190 2502 apiserver.go:52] "Watching apiserver" Dec 12 19:41:58.841658 kubelet[2502]: I1212 19:41:58.841580 2502 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 19:41:58.935809 kubelet[2502]: I1212 19:41:58.935726 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:41:58.946506 kubelet[2502]: W1212 19:41:58.946258 2502 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:41:59.973720 systemd[1]: Reload requested from client PID 2775 ('systemctl') (unit session-11.scope)... Dec 12 19:41:59.973748 systemd[1]: Reloading... Dec 12 19:42:00.132570 zram_generator::config[2820]: No configuration found. Dec 12 19:42:00.525929 systemd[1]: Reloading finished in 550 ms. Dec 12 19:42:00.575526 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:42:00.594930 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 19:42:00.595488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:42:00.595596 systemd[1]: kubelet.service: Consumed 1.415s CPU time, 128.7M memory peak. Dec 12 19:42:00.598584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:42:00.909773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:42:00.923724 (kubelet)[2884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 19:42:01.037671 kubelet[2884]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 19:42:01.037671 kubelet[2884]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 19:42:01.037671 kubelet[2884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 19:42:01.037671 kubelet[2884]: I1212 19:42:01.037438 2884 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 19:42:01.053340 kubelet[2884]: I1212 19:42:01.053285 2884 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 19:42:01.053774 kubelet[2884]: I1212 19:42:01.053535 2884 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 19:42:01.054027 kubelet[2884]: I1212 19:42:01.054005 2884 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 19:42:01.064018 kubelet[2884]: I1212 19:42:01.063966 2884 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 12 19:42:01.084954 kubelet[2884]: I1212 19:42:01.084635 2884 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 19:42:01.097762 kubelet[2884]: I1212 19:42:01.097730 2884 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 19:42:01.104909 kubelet[2884]: I1212 19:42:01.104641 2884 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 19:42:01.105394 kubelet[2884]: I1212 19:42:01.105341 2884 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 19:42:01.105730 kubelet[2884]: I1212 19:42:01.105510 2884 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-tupcq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 19:42:01.106038 kubelet[2884]: I1212 19:42:01.106015 2884 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 19:42:01.106237 kubelet[2884]: I1212 19:42:01.106147 2884 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 19:42:01.106237 kubelet[2884]: I1212 19:42:01.107183 2884 state_mem.go:36] "Initialized new in-memory state store" Dec 12 19:42:01.109126 kubelet[2884]: I1212 19:42:01.108777 2884 kubelet.go:446] "Attempting to sync node with API server" Dec 12 19:42:01.109286 kubelet[2884]: I1212 19:42:01.109266 2884 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 19:42:01.109478 kubelet[2884]: I1212 19:42:01.109440 2884 kubelet.go:352] "Adding apiserver pod source" Dec 12 19:42:01.110187 kubelet[2884]: I1212 19:42:01.109579 2884 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 19:42:01.121328 kubelet[2884]: I1212 19:42:01.121288 2884 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 19:42:01.123134 kubelet[2884]: I1212 19:42:01.123112 2884 kubelet.go:890] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 19:42:01.123850 kubelet[2884]: I1212 19:42:01.123827 2884 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 19:42:01.124000 kubelet[2884]: I1212 19:42:01.123981 2884 server.go:1287] "Started kubelet" Dec 12 19:42:01.129777 kubelet[2884]: I1212 19:42:01.129746 2884 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 19:42:01.148555 kubelet[2884]: I1212 19:42:01.148340 2884 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 19:42:01.162212 kubelet[2884]: I1212 19:42:01.161877 2884 server.go:479] "Adding debug handlers to kubelet server" Dec 12 19:42:01.169577 kubelet[2884]: I1212 19:42:01.148807 2884 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 19:42:01.185932 kubelet[2884]: I1212 19:42:01.178296 2884 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 19:42:01.193303 kubelet[2884]: I1212 19:42:01.179745 2884 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 19:42:01.193693 kubelet[2884]: I1212 19:42:01.179880 2884 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 19:42:01.193693 kubelet[2884]: E1212 19:42:01.180399 2884 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-tupcq.gb1.brightbox.com\" not found" Dec 12 19:42:01.194822 kubelet[2884]: I1212 19:42:01.193866 2884 reconciler.go:26] "Reconciler: start to sync state" Dec 12 19:42:01.196257 kubelet[2884]: I1212 19:42:01.196205 2884 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 19:42:01.201429 kubelet[2884]: I1212 19:42:01.201048 2884 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 19:42:01.230355 kubelet[2884]: I1212 19:42:01.229407 2884 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 19:42:01.236436 kubelet[2884]: I1212 19:42:01.235755 2884 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 19:42:01.236436 kubelet[2884]: I1212 19:42:01.235808 2884 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 19:42:01.236436 kubelet[2884]: I1212 19:42:01.235837 2884 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 19:42:01.236436 kubelet[2884]: I1212 19:42:01.235848 2884 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 19:42:01.236436 kubelet[2884]: E1212 19:42:01.235917 2884 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 19:42:01.249308 kubelet[2884]: E1212 19:42:01.248682 2884 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 19:42:01.254053 kubelet[2884]: I1212 19:42:01.252884 2884 factory.go:221] Registration of the containerd container factory successfully Dec 12 19:42:01.258199 kubelet[2884]: I1212 19:42:01.258166 2884 factory.go:221] Registration of the systemd container factory successfully Dec 12 19:42:01.336253 kubelet[2884]: E1212 19:42:01.336198 2884 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 19:42:01.397212 kubelet[2884]: I1212 19:42:01.397082 2884 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397524 2884 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397560 2884 state_mem.go:36] "Initialized new in-memory state store" Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397876 2884 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397896 2884 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397933 2884 policy_none.go:49] "None policy: Start" Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397953 2884 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 19:42:01.398188 kubelet[2884]: I1212 19:42:01.397994 2884 state_mem.go:35] "Initializing new in-memory state store" Dec 12 19:42:01.398800 kubelet[2884]: I1212 19:42:01.398777 2884 state_mem.go:75] "Updated machine memory state" Dec 12 19:42:01.413145 kubelet[2884]: I1212 19:42:01.412252 2884 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 19:42:01.413145 kubelet[2884]: I1212 19:42:01.412538 2884 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 19:42:01.413145 kubelet[2884]: I1212 19:42:01.412556 2884 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 19:42:01.414974 kubelet[2884]: I1212 19:42:01.414951 2884 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 19:42:01.421850 kubelet[2884]: E1212 19:42:01.421815 2884 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 19:42:01.537942 kubelet[2884]: I1212 19:42:01.537892 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.543372 kubelet[2884]: I1212 19:42:01.543211 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.545899 kubelet[2884]: I1212 19:42:01.545296 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.554741 kubelet[2884]: W1212 19:42:01.554082 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:42:01.558574 kubelet[2884]: W1212 19:42:01.557364 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:42:01.559439 kubelet[2884]: I1212 19:42:01.559408 2884 kubelet_node_status.go:75] "Attempting to register node" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.568154 kubelet[2884]: W1212 19:42:01.566468 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:42:01.568154 kubelet[2884]: E1212 19:42:01.566552 2884 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.580289 kubelet[2884]: I1212 19:42:01.579832 2884 kubelet_node_status.go:124] "Node was previously registered" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.580977 kubelet[2884]: I1212 19:42:01.580897 2884 kubelet_node_status.go:78] "Successfully registered node" node="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.597637 kubelet[2884]: I1212 19:42:01.597503 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7632313986331b0607c379693c63e54f-ca-certs\") pod \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" (UID: \"7632313986331b0607c379693c63e54f\") " pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598018 kubelet[2884]: I1212 19:42:01.597605 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7632313986331b0607c379693c63e54f-k8s-certs\") pod \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" (UID: \"7632313986331b0607c379693c63e54f\") " pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598018 kubelet[2884]: I1212 19:42:01.597904 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-kubeconfig\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598018 kubelet[2884]: I1212 19:42:01.597966 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598387 kubelet[2884]: I1212 19:42:01.598145 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7632313986331b0607c379693c63e54f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" (UID: \"7632313986331b0607c379693c63e54f\") " pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598655 kubelet[2884]: I1212 19:42:01.598195 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-ca-certs\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598655 kubelet[2884]: I1212 19:42:01.598532 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-flexvolume-dir\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598655 kubelet[2884]: I1212 19:42:01.598582 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d53eaee480b8cc7fc1913717047ebcb3-k8s-certs\") pod \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" (UID: \"d53eaee480b8cc7fc1913717047ebcb3\") " pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:01.598655 kubelet[2884]: I1212 19:42:01.598610 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fef7b48adfd3d02e1ad5e903b4d40a2d-kubeconfig\") pod \"kube-scheduler-srv-tupcq.gb1.brightbox.com\" (UID: \"fef7b48adfd3d02e1ad5e903b4d40a2d\") " pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.112119 kubelet[2884]: I1212 19:42:02.111183 2884 apiserver.go:52] "Watching apiserver" Dec 12 19:42:02.194522 kubelet[2884]: I1212 19:42:02.194441 2884 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 19:42:02.327547 kubelet[2884]: I1212 19:42:02.327469 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.328846 kubelet[2884]: I1212 19:42:02.328314 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.329303 kubelet[2884]: I1212 19:42:02.329258 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.337057 kubelet[2884]: W1212 19:42:02.336994 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:42:02.337840 kubelet[2884]: E1212 19:42:02.337315 2884 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-tupcq.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.339797 kubelet[2884]: W1212 19:42:02.339315 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:42:02.339797 kubelet[2884]: E1212 19:42:02.339368 2884 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-tupcq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.339797 kubelet[2884]: W1212 19:42:02.339552 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 19:42:02.339797 kubelet[2884]: E1212 19:42:02.339592 2884 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-tupcq.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" Dec 12 19:42:02.396502 kubelet[2884]: I1212 19:42:02.395675 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-tupcq.gb1.brightbox.com" podStartSLOduration=4.395631315 podStartE2EDuration="4.395631315s" podCreationTimestamp="2025-12-12 19:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:42:02.374128419 +0000 UTC m=+1.440256507" watchObservedRunningTime="2025-12-12 19:42:02.395631315 +0000 UTC m=+1.461759393" Dec 12 19:42:02.396502 kubelet[2884]: I1212 19:42:02.395870 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-tupcq.gb1.brightbox.com" podStartSLOduration=1.395861992 podStartE2EDuration="1.395861992s" podCreationTimestamp="2025-12-12 19:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:42:02.392388434 +0000 UTC m=+1.458516537" watchObservedRunningTime="2025-12-12 19:42:02.395861992 +0000 UTC m=+1.461990101" Dec 12 19:42:02.408951 kubelet[2884]: I1212 19:42:02.408719 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-tupcq.gb1.brightbox.com" podStartSLOduration=1.408693657 podStartE2EDuration="1.408693657s" podCreationTimestamp="2025-12-12 19:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:42:02.407348816 +0000 UTC m=+1.473476910" watchObservedRunningTime="2025-12-12 19:42:02.408693657 +0000 UTC m=+1.474821766" Dec 12 19:42:05.610334 kubelet[2884]: I1212 19:42:05.610228 2884 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 19:42:05.611500 containerd[1566]: time="2025-12-12T19:42:05.611451768Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
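
The pod_startup_latency_tracker entries above follow a simple rule for static pods whose images were already present (firstStartedPulling and lastFinishedPulling are the zero time): podStartSLOduration is exactly watchObservedRunningTime minus podCreationTimestamp. Reproduced for the kube-apiserver pod, with the caveat that Python datetimes carry microseconds, so the nanosecond tail of the logged value is truncated (the pull-time variant appears further below with the tigera-operator pod):

    # Sketch: check podStartSLOduration for the kube-apiserver static pod.
    # Timestamps copied from the log above; datetime resolution is
    # microseconds, so the last three digits of the logged value are lost.
    from datetime import datetime, timezone

    created = datetime(2025, 12, 12, 19, 41, 58, tzinfo=timezone.utc)
    watched = datetime(2025, 12, 12, 19, 42, 2, 395631, tzinfo=timezone.utc)

    print((watched - created).total_seconds())
    # 4.395631, matching podStartSLOduration=4.395631315
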
Dec 12 19:42:05.612860 kubelet[2884]: I1212 19:42:05.611867 2884 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 19:42:06.517514 systemd[1]: Created slice kubepods-besteffort-poda0014ef6_8255_4c0e_a46f_5151d4c54b72.slice - libcontainer container kubepods-besteffort-poda0014ef6_8255_4c0e_a46f_5151d4c54b72.slice. Dec 12 19:42:06.531977 kubelet[2884]: I1212 19:42:06.531910 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0014ef6-8255-4c0e-a46f-5151d4c54b72-kube-proxy\") pod \"kube-proxy-ck4rv\" (UID: \"a0014ef6-8255-4c0e-a46f-5151d4c54b72\") " pod="kube-system/kube-proxy-ck4rv" Dec 12 19:42:06.531977 kubelet[2884]: I1212 19:42:06.531976 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0014ef6-8255-4c0e-a46f-5151d4c54b72-xtables-lock\") pod \"kube-proxy-ck4rv\" (UID: \"a0014ef6-8255-4c0e-a46f-5151d4c54b72\") " pod="kube-system/kube-proxy-ck4rv" Dec 12 19:42:06.532284 kubelet[2884]: I1212 19:42:06.532005 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0014ef6-8255-4c0e-a46f-5151d4c54b72-lib-modules\") pod \"kube-proxy-ck4rv\" (UID: \"a0014ef6-8255-4c0e-a46f-5151d4c54b72\") " pod="kube-system/kube-proxy-ck4rv" Dec 12 19:42:06.532284 kubelet[2884]: I1212 19:42:06.532033 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55pg\" (UniqueName: \"kubernetes.io/projected/a0014ef6-8255-4c0e-a46f-5151d4c54b72-kube-api-access-p55pg\") pod \"kube-proxy-ck4rv\" (UID: \"a0014ef6-8255-4c0e-a46f-5151d4c54b72\") " pod="kube-system/kube-proxy-ck4rv" Dec 12 19:42:06.734337 systemd[1]: Created slice kubepods-besteffort-pod91818ba2_91d5_4a14_ac50_303d59fc237b.slice - libcontainer container kubepods-besteffort-pod91818ba2_91d5_4a14_ac50_303d59fc237b.slice. 
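
The kubepods-besteffort-pod… slices created above follow a fixed naming scheme: QoS class, then the pod UID with its dashes mapped to underscores, because systemd reserves "-" as the hierarchy separator in unit names. A tiny reconstruction; the helper name is illustrative, not a kubelet API:

    # Sketch: the systemd slice name the kubelet derives for a pod cgroup.
    # Dashes in the UID become underscores, since "-" nests slices.
    # pod_slice() is a hypothetical helper, not kubelet code.
    def pod_slice(uid: str, qos: str = "besteffort") -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("a0014ef6-8255-4c0e-a46f-5151d4c54b72"))
    # kubepods-besteffort-poda0014ef6_8255_4c0e_a46f_5151d4c54b72.slice
    # (the unit logged for the kube-proxy-ck4rv pod above)
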
Dec 12 19:42:06.829151 containerd[1566]: time="2025-12-12T19:42:06.828909829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ck4rv,Uid:a0014ef6-8255-4c0e-a46f-5151d4c54b72,Namespace:kube-system,Attempt:0,}" Dec 12 19:42:06.834257 kubelet[2884]: I1212 19:42:06.834205 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91818ba2-91d5-4a14-ac50-303d59fc237b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7dcrj\" (UID: \"91818ba2-91d5-4a14-ac50-303d59fc237b\") " pod="tigera-operator/tigera-operator-7dcd859c48-7dcrj" Dec 12 19:42:06.834873 kubelet[2884]: I1212 19:42:06.834785 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2m2k\" (UniqueName: \"kubernetes.io/projected/91818ba2-91d5-4a14-ac50-303d59fc237b-kube-api-access-g2m2k\") pod \"tigera-operator-7dcd859c48-7dcrj\" (UID: \"91818ba2-91d5-4a14-ac50-303d59fc237b\") " pod="tigera-operator/tigera-operator-7dcd859c48-7dcrj" Dec 12 19:42:06.860459 containerd[1566]: time="2025-12-12T19:42:06.860379661Z" level=info msg="connecting to shim f2c7a56a03afaa88a21b51ae7405623fc276aeb0b4501822637a74a5f7da3f36" address="unix:///run/containerd/s/6ee36060bdb93569a00f567922eb4366142e9eaeb3ee1787bdb4bd83400a1f18" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:06.914465 systemd[1]: Started cri-containerd-f2c7a56a03afaa88a21b51ae7405623fc276aeb0b4501822637a74a5f7da3f36.scope - libcontainer container f2c7a56a03afaa88a21b51ae7405623fc276aeb0b4501822637a74a5f7da3f36. Dec 12 19:42:06.993437 containerd[1566]: time="2025-12-12T19:42:06.993342544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ck4rv,Uid:a0014ef6-8255-4c0e-a46f-5151d4c54b72,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2c7a56a03afaa88a21b51ae7405623fc276aeb0b4501822637a74a5f7da3f36\"" Dec 12 19:42:07.002761 containerd[1566]: time="2025-12-12T19:42:07.002697585Z" level=info msg="CreateContainer within sandbox \"f2c7a56a03afaa88a21b51ae7405623fc276aeb0b4501822637a74a5f7da3f36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 19:42:07.016334 containerd[1566]: time="2025-12-12T19:42:07.016275245Z" level=info msg="Container 537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:42:07.028026 containerd[1566]: time="2025-12-12T19:42:07.027937219Z" level=info msg="CreateContainer within sandbox \"f2c7a56a03afaa88a21b51ae7405623fc276aeb0b4501822637a74a5f7da3f36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74\"" Dec 12 19:42:07.029670 containerd[1566]: time="2025-12-12T19:42:07.029274631Z" level=info msg="StartContainer for \"537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74\"" Dec 12 19:42:07.033342 containerd[1566]: time="2025-12-12T19:42:07.033306607Z" level=info msg="connecting to shim 537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74" address="unix:///run/containerd/s/6ee36060bdb93569a00f567922eb4366142e9eaeb3ee1787bdb4bd83400a1f18" protocol=ttrpc version=3 Dec 12 19:42:07.041500 containerd[1566]: time="2025-12-12T19:42:07.041412185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7dcrj,Uid:91818ba2-91d5-4a14-ac50-303d59fc237b,Namespace:tigera-operator,Attempt:0,}" Dec 12 19:42:07.068583 systemd[1]: Started 
cri-containerd-537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74.scope - libcontainer container 537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74. Dec 12 19:42:07.080589 containerd[1566]: time="2025-12-12T19:42:07.079786193Z" level=info msg="connecting to shim 2b7fe6c2eb7dd2fb332c497629baf440a57644b8a217404199b5bb65580e59f2" address="unix:///run/containerd/s/e857185bf7379dae849d7838d9f7fad05b220d3fbc613f75c49d85add17df684" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:07.136357 systemd[1]: Started cri-containerd-2b7fe6c2eb7dd2fb332c497629baf440a57644b8a217404199b5bb65580e59f2.scope - libcontainer container 2b7fe6c2eb7dd2fb332c497629baf440a57644b8a217404199b5bb65580e59f2. Dec 12 19:42:07.206919 containerd[1566]: time="2025-12-12T19:42:07.206807414Z" level=info msg="StartContainer for \"537974283a594a3721d081fe18618b550ddc6780fb905e346527f714867ffc74\" returns successfully" Dec 12 19:42:07.250889 containerd[1566]: time="2025-12-12T19:42:07.250833024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7dcrj,Uid:91818ba2-91d5-4a14-ac50-303d59fc237b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b7fe6c2eb7dd2fb332c497629baf440a57644b8a217404199b5bb65580e59f2\"" Dec 12 19:42:07.255564 containerd[1566]: time="2025-12-12T19:42:07.255410492Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 19:42:07.366852 kubelet[2884]: I1212 19:42:07.365910 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ck4rv" podStartSLOduration=1.365878577 podStartE2EDuration="1.365878577s" podCreationTimestamp="2025-12-12 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:42:07.365876119 +0000 UTC m=+6.432004222" watchObservedRunningTime="2025-12-12 19:42:07.365878577 +0000 UTC m=+6.432006670" Dec 12 19:42:07.657635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240883515.mount: Deactivated successfully. Dec 12 19:42:09.402015 systemd[1]: Started sshd@11-10.244.20.246:22-157.245.76.79:44908.service - OpenSSH per-connection server daemon (157.245.76.79:44908). Dec 12 19:42:09.589088 sshd[3184]: Invalid user webmaster from 157.245.76.79 port 44908 Dec 12 19:42:09.609438 sshd[3184]: Connection closed by invalid user webmaster 157.245.76.79 port 44908 [preauth] Dec 12 19:42:09.613105 systemd[1]: sshd@11-10.244.20.246:22-157.245.76.79:44908.service: Deactivated successfully. Dec 12 19:42:12.156724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136124495.mount: Deactivated successfully. 
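
The sshd records above, an "Invalid user webmaster" login attempt that is closed preauth, are ordinary Internet scanning against the node's public address rather than anything cluster-related. A throwaway stdlib sketch that tallies such probes per source IP from journal text on stdin (for example, journalctl output piped in); the regex mirrors the message format visible here:

    # Sketch: count "Invalid user" preauth probes per source IP from
    # journal text on stdin. The regex follows the sshd line format above.
    import re
    import sys
    from collections import Counter

    PROBE = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")

    hits = Counter()
    for line in sys.stdin:
        m = PROBE.search(line)
        if m:
            hits[m.group(2)] += 1

    for ip, n in hits.most_common():
        print(f"{ip}\t{n}")
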
Dec 12 19:42:13.696747 containerd[1566]: time="2025-12-12T19:42:13.696672861Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:42:13.701500 containerd[1566]: time="2025-12-12T19:42:13.701426711Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 19:42:13.706532 containerd[1566]: time="2025-12-12T19:42:13.706474438Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:42:13.709704 containerd[1566]: time="2025-12-12T19:42:13.709263949Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:42:13.710335 containerd[1566]: time="2025-12-12T19:42:13.710294487Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.454830508s" Dec 12 19:42:13.710421 containerd[1566]: time="2025-12-12T19:42:13.710340422Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 19:42:13.715584 containerd[1566]: time="2025-12-12T19:42:13.715526529Z" level=info msg="CreateContainer within sandbox \"2b7fe6c2eb7dd2fb332c497629baf440a57644b8a217404199b5bb65580e59f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 19:42:13.724902 containerd[1566]: time="2025-12-12T19:42:13.724256628Z" level=info msg="Container 7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:42:13.731262 containerd[1566]: time="2025-12-12T19:42:13.731217548Z" level=info msg="CreateContainer within sandbox \"2b7fe6c2eb7dd2fb332c497629baf440a57644b8a217404199b5bb65580e59f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb\"" Dec 12 19:42:13.732567 containerd[1566]: time="2025-12-12T19:42:13.732520823Z" level=info msg="StartContainer for \"7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb\"" Dec 12 19:42:13.735908 containerd[1566]: time="2025-12-12T19:42:13.735849533Z" level=info msg="connecting to shim 7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb" address="unix:///run/containerd/s/e857185bf7379dae849d7838d9f7fad05b220d3fbc613f75c49d85add17df684" protocol=ttrpc version=3 Dec 12 19:42:13.774453 systemd[1]: Started cri-containerd-7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb.scope - libcontainer container 7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb. 
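
The pull statistics above are enough for a rough transfer rate: 25,061,691 bytes read over 6.454830508 s is about 3.7 MiB/s from quay.io. Note that "bytes read" counts the compressed transfer, so this is network throughput rather than unpacked image size. Spelled out:

    # Sketch: network throughput of the tigera-operator image pull,
    # from the "bytes read" and elapsed time logged above.
    bytes_read = 25_061_691     # "active requests=0, bytes read=25061691"
    elapsed_s = 6.454830508     # "... in 6.454830508s"

    rate = bytes_read / elapsed_s
    print(f"{rate / 1024 / 1024:.2f} MiB/s")   # ~3.70 MiB/s
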
Dec 12 19:42:13.828111 containerd[1566]: time="2025-12-12T19:42:13.828040693Z" level=info msg="StartContainer for \"7f615e2aa056d1f0d0988c22f3cb3fd76840198c9362937e99471c857adf9deb\" returns successfully" Dec 12 19:42:19.458726 sudo[1893]: pam_unix(sudo:session): session closed for user root Dec 12 19:42:19.614119 sshd[1892]: Connection closed by 147.75.109.163 port 56768 Dec 12 19:42:19.614142 sshd-session[1889]: pam_unix(sshd:session): session closed for user core Dec 12 19:42:19.623052 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Dec 12 19:42:19.625544 systemd[1]: sshd@9-10.244.20.246:22-147.75.109.163:56768.service: Deactivated successfully. Dec 12 19:42:19.633726 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 19:42:19.635307 systemd[1]: session-11.scope: Consumed 6.333s CPU time, 157.9M memory peak. Dec 12 19:42:19.646408 systemd-logind[1534]: Removed session 11. Dec 12 19:42:26.525527 kubelet[2884]: I1212 19:42:26.525191 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7dcrj" podStartSLOduration=14.065830047 podStartE2EDuration="20.524932981s" podCreationTimestamp="2025-12-12 19:42:06 +0000 UTC" firstStartedPulling="2025-12-12 19:42:07.253278011 +0000 UTC m=+6.319406088" lastFinishedPulling="2025-12-12 19:42:13.712380938 +0000 UTC m=+12.778509022" observedRunningTime="2025-12-12 19:42:14.40221894 +0000 UTC m=+13.468347038" watchObservedRunningTime="2025-12-12 19:42:26.524932981 +0000 UTC m=+25.591061059" Dec 12 19:42:26.541824 systemd[1]: Created slice kubepods-besteffort-pod279c60d0_06d3_4812_80e2_1da9fed9b10c.slice - libcontainer container kubepods-besteffort-pod279c60d0_06d3_4812_80e2_1da9fed9b10c.slice. Dec 12 19:42:26.578410 kubelet[2884]: I1212 19:42:26.578343 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/279c60d0-06d3-4812-80e2-1da9fed9b10c-typha-certs\") pod \"calico-typha-78c8b85f9c-js598\" (UID: \"279c60d0-06d3-4812-80e2-1da9fed9b10c\") " pod="calico-system/calico-typha-78c8b85f9c-js598" Dec 12 19:42:26.578410 kubelet[2884]: I1212 19:42:26.578404 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd6bn\" (UniqueName: \"kubernetes.io/projected/279c60d0-06d3-4812-80e2-1da9fed9b10c-kube-api-access-sd6bn\") pod \"calico-typha-78c8b85f9c-js598\" (UID: \"279c60d0-06d3-4812-80e2-1da9fed9b10c\") " pod="calico-system/calico-typha-78c8b85f9c-js598" Dec 12 19:42:26.578691 kubelet[2884]: I1212 19:42:26.578438 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/279c60d0-06d3-4812-80e2-1da9fed9b10c-tigera-ca-bundle\") pod \"calico-typha-78c8b85f9c-js598\" (UID: \"279c60d0-06d3-4812-80e2-1da9fed9b10c\") " pod="calico-system/calico-typha-78c8b85f9c-js598" Dec 12 19:42:26.722837 systemd[1]: Created slice kubepods-besteffort-pod0f72b7c6_b7c2_4e20_96ec_d323c5be0331.slice - libcontainer container kubepods-besteffort-pod0f72b7c6_b7c2_4e20_96ec_d323c5be0331.slice. 
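
The tigera-operator entry above shows the pull-time variant of the startup tracker: podStartSLOduration is podStartE2EDuration minus the time spent pulling the image. Go subtracts time.Time values on the monotonic clock when one is attached, so the m=+ offsets in the log reproduce the value exactly, where the wall-clock timestamps would drift by nanoseconds:

    # Sketch: podStartSLOduration for tigera-operator-7dcd859c48-7dcrj,
    # i.e. end-to-end startup minus image-pull time. The m=+ monotonic
    # offsets are copied from the log record above.
    first_started_pulling = 6.319406088    # m=+6.319406088
    last_finished_pulling = 12.778509022   # m=+12.778509022
    e2e = 20.524932981                     # podStartE2EDuration

    slo = e2e - (last_finished_pulling - first_started_pulling)
    print(f"{slo:.9f}")   # 14.065830047, matching the logged value
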
Dec 12 19:42:26.779978 kubelet[2884]: I1212 19:42:26.779802 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-xtables-lock\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.779978 kubelet[2884]: I1212 19:42:26.779870 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-cni-net-dir\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.779978 kubelet[2884]: I1212 19:42:26.779897 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-var-run-calico\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.779978 kubelet[2884]: I1212 19:42:26.779936 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-var-lib-calico\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.779978 kubelet[2884]: I1212 19:42:26.779966 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-lib-modules\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780418 kubelet[2884]: I1212 19:42:26.779991 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-policysync\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780418 kubelet[2884]: I1212 19:42:26.780018 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-tigera-ca-bundle\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780418 kubelet[2884]: I1212 19:42:26.780044 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-cni-log-dir\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780418 kubelet[2884]: I1212 19:42:26.780069 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-flexvol-driver-host\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780418 kubelet[2884]: I1212 19:42:26.780130 2884 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-cni-bin-dir\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780643 kubelet[2884]: I1212 19:42:26.780175 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-node-certs\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.780643 kubelet[2884]: I1212 19:42:26.780203 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spbm\" (UniqueName: \"kubernetes.io/projected/0f72b7c6-b7c2-4e20-96ec-d323c5be0331-kube-api-access-6spbm\") pod \"calico-node-pt92m\" (UID: \"0f72b7c6-b7c2-4e20-96ec-d323c5be0331\") " pod="calico-system/calico-node-pt92m" Dec 12 19:42:26.855987 containerd[1566]: time="2025-12-12T19:42:26.855830179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c8b85f9c-js598,Uid:279c60d0-06d3-4812-80e2-1da9fed9b10c,Namespace:calico-system,Attempt:0,}" Dec 12 19:42:26.917116 containerd[1566]: time="2025-12-12T19:42:26.916933193Z" level=info msg="connecting to shim 28ab749fa1c73cd5160ee79b9f039f9b0c6245b6145e25ca41e5a7a8ca7df913" address="unix:///run/containerd/s/12c74fa3f54d4edf43387fc3a0cc9bac0283405937eb5a35ebe502e5635adbf7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:26.929698 kubelet[2884]: E1212 19:42:26.929465 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:42:26.929698 kubelet[2884]: W1212 19:42:26.929518 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:42:26.930891 kubelet[2884]: E1212 19:42:26.930860 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:42:26.974488 kubelet[2884]: E1212 19:42:26.974435 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:42:26.974985 kubelet[2884]: W1212 19:42:26.974712 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:42:26.974985 kubelet[2884]: E1212 19:42:26.974751 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:42:26.979296 kubelet[2884]: E1212 19:42:26.979163 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022" Dec 12 19:42:27.018444 systemd[1]: Started cri-containerd-28ab749fa1c73cd5160ee79b9f039f9b0c6245b6145e25ca41e5a7a8ca7df913.scope - libcontainer container 28ab749fa1c73cd5160ee79b9f039f9b0c6245b6145e25ca41e5a7a8ca7df913. Dec 12 19:42:27.032796 containerd[1566]: time="2025-12-12T19:42:27.032022785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pt92m,Uid:0f72b7c6-b7c2-4e20-96ec-d323c5be0331,Namespace:calico-system,Attempt:0,}" Dec 12 19:42:27.071989 kubelet[2884]: E1212 19:42:27.071943 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:42:27.071989 kubelet[2884]: W1212 19:42:27.071977 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:42:27.071989 kubelet[2884]: E1212 19:42:27.072014 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:42:27.072997 kubelet[2884]: E1212 19:42:27.072589 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:42:27.072997 kubelet[2884]: W1212 19:42:27.072603 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:42:27.072997 kubelet[2884]: E1212 19:42:27.072618 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:42:27.073204 kubelet[2884]: E1212 19:42:27.073185 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:42:27.073204 kubelet[2884]: W1212 19:42:27.073200 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:42:27.073311 kubelet[2884]: E1212 19:42:27.073218 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:42:27.074253 kubelet[2884]: E1212 19:42:27.074230 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:42:27.074253 kubelet[2884]: W1212 19:42:27.074250 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:42:27.074253 kubelet[2884]: E1212 19:42:27.074266 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 12 19:42:27.075434 kubelet[2884]: E1212 19:42:27.075079 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 19:42:27.075434 kubelet[2884]: W1212 19:42:27.075429 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 19:42:27.075731 kubelet[2884]: E1212 19:42:27.075447 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the driver-call.go/plugins.go triplet above repeats identically, timestamps 19:42:27.076 through 19:42:27.105, as the kubelet re-probes the nodeagent~uds plugin directory; only the distinct entries interleaved with it are kept below]
Dec 12 19:42:27.090121 kubelet[2884]: I1212 19:42:27.089621 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbfhg\" (UniqueName: \"kubernetes.io/projected/9bfee0fd-b637-401b-8c2c-b95c13a62022-kube-api-access-qbfhg\") pod \"csi-node-driver-v968r\" (UID: \"9bfee0fd-b637-401b-8c2c-b95c13a62022\") " pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:27.090425 kubelet[2884]: I1212 19:42:27.090167 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9bfee0fd-b637-401b-8c2c-b95c13a62022-registration-dir\") pod \"csi-node-driver-v968r\" (UID: \"9bfee0fd-b637-401b-8c2c-b95c13a62022\") " pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:27.091042 kubelet[2884]: I1212 19:42:27.090721 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bfee0fd-b637-401b-8c2c-b95c13a62022-kubelet-dir\") pod \"csi-node-driver-v968r\" (UID: \"9bfee0fd-b637-401b-8c2c-b95c13a62022\") " pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:27.093274 kubelet[2884]: I1212 19:42:27.092445 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9bfee0fd-b637-401b-8c2c-b95c13a62022-socket-dir\") pod \"csi-node-driver-v968r\" (UID: \"9bfee0fd-b637-401b-8c2c-b95c13a62022\") " pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:27.100867 kubelet[2884]: I1212 19:42:27.100717 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9bfee0fd-b637-401b-8c2c-b95c13a62022-varrun\") pod \"csi-node-driver-v968r\" (UID: \"9bfee0fd-b637-401b-8c2c-b95c13a62022\") " pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:27.105850 containerd[1566]: time="2025-12-12T19:42:27.105405914Z" level=info msg="connecting to shim b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee" address="unix:///run/containerd/s/40397b560dfa9f68398d80bd93ca877a8c10075d829d6ab2d4120ce63b187016" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:42:27.173889 systemd[1]: Started cri-containerd-b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee.scope - libcontainer container b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee.
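The repeated driver-call.go failures above come from the kubelet probing every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, exec'ing each driver binary with the argument "init", and parsing its stdout as JSON. Here the nodeagent~uds/uds binary is missing, stdout is empty, and the JSON decode fails with "unexpected end of JSON input". A minimal sketch, assuming only the documented FlexVolume call convention, of what a conforming driver prints for init (illustrative only; not Calico's actual uds driver, which the flexvol-driver container installed below provides):

```go
// flexvol_stub.go - minimal sketch of a FlexVolume driver answering the
// kubelet's "init" probe. The kubelet execs "<driver> init" and expects a
// JSON status object on stdout; an empty stdout is exactly what produces
// the "unexpected end of JSON input" errors in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON shape the kubelet's driver-call.go decodes.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not implement.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```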
Dec 12 19:42:27.208648 kubelet[2884]: E1212 19:42:27.208610 2884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 19:42:27.208648 kubelet[2884]: W1212 19:42:27.208641 2884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 19:42:27.209262 kubelet[2884]: E1212 19:42:27.208671 2884 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same triplet repeats, timestamps 19:42:27.209 through 19:42:27.239, with nothing else interleaved]
Dec 12 19:42:27.279929 containerd[1566]: time="2025-12-12T19:42:27.279794839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pt92m,Uid:0f72b7c6-b7c2-4e20-96ec-d323c5be0331,Namespace:calico-system,Attempt:0,} returns sandbox id \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\""
Dec 12 19:42:27.284038 containerd[1566]: time="2025-12-12T19:42:27.283868548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 19:42:27.290725 containerd[1566]: time="2025-12-12T19:42:27.290662173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c8b85f9c-js598,Uid:279c60d0-06d3-4812-80e2-1da9fed9b10c,Namespace:calico-system,Attempt:0,} returns sandbox id \"28ab749fa1c73cd5160ee79b9f039f9b0c6245b6145e25ca41e5a7a8ca7df913\""
Dec 12 19:42:29.122327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645613211.mount: Deactivated successfully.
Dec 12 19:42:29.238171 kubelet[2884]: E1212 19:42:29.238037 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:29.313375 containerd[1566]: time="2025-12-12T19:42:29.312299759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:29.313375 containerd[1566]: time="2025-12-12T19:42:29.313325649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492"
Dec 12 19:42:29.314224 containerd[1566]: time="2025-12-12T19:42:29.314172643Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:29.316667 containerd[1566]: time="2025-12-12T19:42:29.316628778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:29.317913 containerd[1566]: time="2025-12-12T19:42:29.317794285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.03352916s"
Dec 12 19:42:29.318053 containerd[1566]: time="2025-12-12T19:42:29.318023115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 12 19:42:29.320325 containerd[1566]: time="2025-12-12T19:42:29.320283770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 19:42:29.327107 containerd[1566]: time="2025-12-12T19:42:29.325286521Z" level=info msg="CreateContainer within sandbox \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 12 19:42:29.341394 containerd[1566]: time="2025-12-12T19:42:29.341330479Z" level=info msg="Container f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d: CDI devices from CRI Config.CDIDevices: []"
Dec 12 19:42:29.347706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213563873.mount: Deactivated successfully.
Dec 12 19:42:29.354823 containerd[1566]: time="2025-12-12T19:42:29.354753094Z" level=info msg="CreateContainer within sandbox \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d\""
Dec 12 19:42:29.356176 containerd[1566]: time="2025-12-12T19:42:29.356142629Z" level=info msg="StartContainer for \"f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d\""
Dec 12 19:42:29.359102 containerd[1566]: time="2025-12-12T19:42:29.359063653Z" level=info msg="connecting to shim f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d" address="unix:///run/containerd/s/40397b560dfa9f68398d80bd93ca877a8c10075d829d6ab2d4120ce63b187016" protocol=ttrpc version=3
Dec 12 19:42:29.403354 systemd[1]: Started cri-containerd-f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d.scope - libcontainer container f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d.
Dec 12 19:42:29.526366 containerd[1566]: time="2025-12-12T19:42:29.526308723Z" level=info msg="StartContainer for \"f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d\" returns successfully"
Dec 12 19:42:29.536384 systemd[1]: cri-containerd-f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d.scope: Deactivated successfully.
Dec 12 19:42:29.568303 containerd[1566]: time="2025-12-12T19:42:29.568197586Z" level=info msg="received container exit event container_id:\"f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d\" id:\"f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d\" pid:3482 exited_at:{seconds:1765568549 nanos:538600487}"
Dec 12 19:42:30.014978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f90ab26b241cd75b8dd0c381c91e90e005994bbef747f17dac28c0f11627f86d-rootfs.mount: Deactivated successfully.
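The CreateContainer / connecting-to-shim / StartContainer / exit-event sequence above is containerd's normal task lifecycle. A sketch of the same flow through containerd's public Go client, assuming the classic v1 import path; the kubelet itself drives this via the CRI gRPC API instead, so this is illustrative. The socket path, k8s.io namespace, and image ref are taken from the log; the container and snapshot ids are made up for the demo:

```go
// task_lifecycle.go - sketch of the pull/create/start/wait sequence that
// produces the PullImage, CreateContainer, StartContainer and container
// exit events logged above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed objects live in the k8s.io namespace (see "namespace=k8s.io" above).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	c, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithNewSnapshot("flexvol-driver-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}
	defer c.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask connects to a shim over a unix socket, as in the
	// "connecting to shim ... protocol=ttrpc" lines above.
	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // subscribe before Start to avoid a race
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh // corresponds to the "received container exit event" entry
	code, exitedAt, _ := status.Result()
	log.Printf("exited with %d at %v", code, exitedAt)
}
```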
Dec 12 19:42:31.248148 kubelet[2884]: E1212 19:42:31.246126 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:32.611120 containerd[1566]: time="2025-12-12T19:42:32.610956516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:32.612866 containerd[1566]: time="2025-12-12T19:42:32.612611365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Dec 12 19:42:32.613740 containerd[1566]: time="2025-12-12T19:42:32.613700660Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:32.616435 containerd[1566]: time="2025-12-12T19:42:32.616394372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:32.617457 containerd[1566]: time="2025-12-12T19:42:32.617412985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.29708656s"
Dec 12 19:42:32.617551 containerd[1566]: time="2025-12-12T19:42:32.617461993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 19:42:32.619269 containerd[1566]: time="2025-12-12T19:42:32.619225592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 12 19:42:32.643723 containerd[1566]: time="2025-12-12T19:42:32.643643763Z" level=info msg="CreateContainer within sandbox \"28ab749fa1c73cd5160ee79b9f039f9b0c6245b6145e25ca41e5a7a8ca7df913\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 19:42:32.655780 containerd[1566]: time="2025-12-12T19:42:32.653448493Z" level=info msg="Container b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956: CDI devices from CRI Config.CDIDevices: []"
Dec 12 19:42:32.667210 containerd[1566]: time="2025-12-12T19:42:32.667166597Z" level=info msg="CreateContainer within sandbox \"28ab749fa1c73cd5160ee79b9f039f9b0c6245b6145e25ca41e5a7a8ca7df913\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956\""
Dec 12 19:42:32.669244 containerd[1566]: time="2025-12-12T19:42:32.669213053Z" level=info msg="StartContainer for \"b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956\""
Dec 12 19:42:32.670779 containerd[1566]: time="2025-12-12T19:42:32.670744311Z" level=info msg="connecting to shim b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956" address="unix:///run/containerd/s/12c74fa3f54d4edf43387fc3a0cc9bac0283405937eb5a35ebe502e5635adbf7" protocol=ttrpc version=3
Dec 12 19:42:32.713369 systemd[1]: Started cri-containerd-b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956.scope - libcontainer container b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956.
Dec 12 19:42:32.812003 containerd[1566]: time="2025-12-12T19:42:32.811736259Z" level=info msg="StartContainer for \"b17a9f502aca47f74a2542c78896983286a8ba9433fbd8cb0d9b1137d7386956\" returns successfully"
Dec 12 19:42:33.237026 kubelet[2884]: E1212 19:42:33.236948 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:33.771535 systemd[1]: Started sshd@12-10.244.20.246:22-157.245.76.79:37650.service - OpenSSH per-connection server daemon (157.245.76.79:37650).
Dec 12 19:42:34.262003 sshd[3561]: Invalid user webmaster from 157.245.76.79 port 37650
Dec 12 19:42:34.332746 sshd[3561]: Connection closed by invalid user webmaster 157.245.76.79 port 37650 [preauth]
Dec 12 19:42:34.336193 systemd[1]: sshd@12-10.244.20.246:22-157.245.76.79:37650.service: Deactivated successfully.
Dec 12 19:42:34.463042 kubelet[2884]: I1212 19:42:34.462996 2884 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 19:42:35.239113 kubelet[2884]: E1212 19:42:35.236794 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:37.242854 kubelet[2884]: E1212 19:42:37.242803 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:37.292236 kubelet[2884]: I1212 19:42:37.292183 2884 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 19:42:37.335952 kubelet[2884]: I1212 19:42:37.335860 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78c8b85f9c-js598" podStartSLOduration=6.010387013 podStartE2EDuration="11.334847662s" podCreationTimestamp="2025-12-12 19:42:26 +0000 UTC" firstStartedPulling="2025-12-12 19:42:27.294204302 +0000 UTC m=+26.360332374" lastFinishedPulling="2025-12-12 19:42:32.61866494 +0000 UTC m=+31.684793023" observedRunningTime="2025-12-12 19:42:33.478839463 +0000 UTC m=+32.544967549" watchObservedRunningTime="2025-12-12 19:42:37.334847662 +0000 UTC m=+36.400975740"
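The pod_startup_latency_tracker entry above can be checked from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window. A quick sketch of that arithmetic, assuming the pull window is taken from the monotonic "m=+" offsets embedded in the entry (which is what makes the logged numbers line up):

```go
// slo_duration.go - reproduces the arithmetic in the
// pod_startup_latency_tracker line for calico-typha-78c8b85f9c-js598.
package main

import "fmt"

func main() {
	e2e := 11.334847662                 // podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp
	pull := 31.684793023 - 26.360332374 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("podStartSLOduration ≈ %.9fs\n", e2e-pull) // ≈ 6.010387013s, as logged
}
```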
Dec 12 19:42:37.660142 containerd[1566]: time="2025-12-12T19:42:37.659184267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:37.661689 containerd[1566]: time="2025-12-12T19:42:37.661646875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 12 19:42:37.663240 containerd[1566]: time="2025-12-12T19:42:37.663201480Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:37.666195 containerd[1566]: time="2025-12-12T19:42:37.666136913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:37.667777 containerd[1566]: time="2025-12-12T19:42:37.667741142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.048469272s"
Dec 12 19:42:37.667907 containerd[1566]: time="2025-12-12T19:42:37.667881305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 12 19:42:37.670447 containerd[1566]: time="2025-12-12T19:42:37.670414508Z" level=info msg="CreateContainer within sandbox \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 12 19:42:37.686451 containerd[1566]: time="2025-12-12T19:42:37.686377336Z" level=info msg="Container f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb: CDI devices from CRI Config.CDIDevices: []"
Dec 12 19:42:37.695478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599454706.mount: Deactivated successfully.
Dec 12 19:42:37.706070 containerd[1566]: time="2025-12-12T19:42:37.703905479Z" level=info msg="CreateContainer within sandbox \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb\""
Dec 12 19:42:37.707880 containerd[1566]: time="2025-12-12T19:42:37.707799087Z" level=info msg="StartContainer for \"f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb\""
Dec 12 19:42:37.711217 containerd[1566]: time="2025-12-12T19:42:37.711167780Z" level=info msg="connecting to shim f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb" address="unix:///run/containerd/s/40397b560dfa9f68398d80bd93ca877a8c10075d829d6ab2d4120ce63b187016" protocol=ttrpc version=3
Dec 12 19:42:37.750356 systemd[1]: Started cri-containerd-f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb.scope - libcontainer container f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb.
Dec 12 19:42:37.867788 containerd[1566]: time="2025-12-12T19:42:37.867732191Z" level=info msg="StartContainer for \"f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb\" returns successfully"
Dec 12 19:42:38.988644 systemd[1]: cri-containerd-f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb.scope: Deactivated successfully.
Dec 12 19:42:38.990237 systemd[1]: cri-containerd-f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb.scope: Consumed 800ms CPU time, 165.3M memory peak, 9.3M read from disk, 171.3M written to disk.
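The scope accounting above (800ms CPU, 165.3M memory peak) is read by systemd from the scope's cgroup. A sketch of reading the same counters from the cgroup v2 files; the slice path below is a guess at this node's layout (take the real one from systemctl status), and memory.peak requires a 5.19+ kernel:

```go
// scope_stats.go - sketch of reading the resource figures systemd reports
// for a cri-containerd scope from its cgroup v2 files. The pod slice path
// is hypothetical; kubelet-managed scopes actually sit several slices deep
// under kubepods.slice.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	scope := "cri-containerd-f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb.scope"
	dir := filepath.Join("/sys/fs/cgroup/kubepods.slice", scope) // assumed path

	for _, f := range []string{"cpu.stat", "memory.peak", "memory.current"} {
		b, err := os.ReadFile(filepath.Join(dir, f))
		if err != nil {
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		fmt.Printf("%s:\n%s", f, b)
	}
}
```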
Dec 12 19:42:39.004915 containerd[1566]: time="2025-12-12T19:42:39.004687804Z" level=info msg="received container exit event container_id:\"f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb\" id:\"f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb\" pid:3592 exited_at:{seconds:1765568559 nanos:4168448}"
Dec 12 19:42:39.063554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9abd795152a22b52fbaaab19b9afafc806072050f122011f22b343b509083eb-rootfs.mount: Deactivated successfully.
Dec 12 19:42:39.075678 kubelet[2884]: I1212 19:42:39.075620 2884 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 12 19:42:39.221280 kubelet[2884]: I1212 19:42:39.221036 2884 status_manager.go:890] "Failed to get status for pod" podUID="61f7b8fe-bbd8-4326-bc5c-e785765bcc23" pod="kube-system/coredns-668d6bf9bc-kk6xs" err="pods \"coredns-668d6bf9bc-kk6xs\" is forbidden: User \"system:node:srv-tupcq.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-tupcq.gb1.brightbox.com' and this object"
Dec 12 19:42:39.226231 kubelet[2884]: W1212 19:42:39.223842 2884 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-tupcq.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-tupcq.gb1.brightbox.com' and this object
Dec 12 19:42:39.226712 kubelet[2884]: E1212 19:42:39.226261 2884 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-tupcq.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-tupcq.gb1.brightbox.com' and this object" logger="UnhandledError"
Dec 12 19:42:39.230498 systemd[1]: Created slice kubepods-burstable-pod61f7b8fe_bbd8_4326_bc5c_e785765bcc23.slice - libcontainer container kubepods-burstable-pod61f7b8fe_bbd8_4326_bc5c_e785765bcc23.slice.
Dec 12 19:42:39.259682 systemd[1]: Created slice kubepods-besteffort-pod95916008_465f_4755_98cd_82437c8d75be.slice - libcontainer container kubepods-besteffort-pod95916008_465f_4755_98cd_82437c8d75be.slice.
Dec 12 19:42:39.272970 systemd[1]: Created slice kubepods-besteffort-podfa81211e_8b3a_4af8_b6e2_d28a7d96f939.slice - libcontainer container kubepods-besteffort-podfa81211e_8b3a_4af8_b6e2_d28a7d96f939.slice.
Dec 12 19:42:39.287061 systemd[1]: Created slice kubepods-besteffort-poda54d3a6c_07c3_4ee2_a301_dcc61165df66.slice - libcontainer container kubepods-besteffort-poda54d3a6c_07c3_4ee2_a301_dcc61165df66.slice.
Dec 12 19:42:39.300429 systemd[1]: Created slice kubepods-burstable-podd836babe_8f7d_4346_8873_331eb853865c.slice - libcontainer container kubepods-burstable-podd836babe_8f7d_4346_8873_331eb853865c.slice.
Dec 12 19:42:39.317701 systemd[1]: Created slice kubepods-besteffort-podcb91d52e_eadb_42e8_8836_c86b003fbe7b.slice - libcontainer container kubepods-besteffort-podcb91d52e_eadb_42e8_8836_c86b003fbe7b.slice.
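The Created slice names above follow the kubelet's systemd cgroup driver convention: QoS class plus the pod UID with dashes replaced by underscores. A sketch that reproduces the names in this log (the guaranteed-QoS branch is an assumption; only burstable and besteffort slices appear here):

```go
// slice_name.go - reconstructs the systemd slice names the kubelet creates
// above from a pod's QoS class and UID (dashes in the UID become underscores).
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	esc := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		// Guaranteed pods get no QoS segment -- assumption, not shown in this log.
		return fmt.Sprintf("kubepods-pod%s.slice", esc)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, esc)
}

func main() {
	// coredns-668d6bf9bc-kk6xs, Burstable QoS:
	fmt.Println(podSliceName("burstable", "61f7b8fe-bbd8-4326-bc5c-e785765bcc23"))
	// csi-node-driver-v968r, BestEffort QoS:
	fmt.Println(podSliceName("besteffort", "9bfee0fd-b637-401b-8c2c-b95c13a62022"))
}
```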
Dec 12 19:42:39.322806 kubelet[2884]: I1212 19:42:39.321862 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhfq5\" (UniqueName: \"kubernetes.io/projected/fa81211e-8b3a-4af8-b6e2-d28a7d96f939-kube-api-access-bhfq5\") pod \"calico-apiserver-76c8ff9cd8-ml9g6\" (UID: \"fa81211e-8b3a-4af8-b6e2-d28a7d96f939\") " pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6"
Dec 12 19:42:39.323712 kubelet[2884]: I1212 19:42:39.322989 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-ca-bundle\") pod \"whisker-56bb7f8f79-hm2qk\" (UID: \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\") " pod="calico-system/whisker-56bb7f8f79-hm2qk"
Dec 12 19:42:39.324401 kubelet[2884]: I1212 19:42:39.324348 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbvq\" (UniqueName: \"kubernetes.io/projected/a54d3a6c-07c3-4ee2-a301-dcc61165df66-kube-api-access-dhbvq\") pod \"whisker-56bb7f8f79-hm2qk\" (UID: \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\") " pod="calico-system/whisker-56bb7f8f79-hm2qk"
Dec 12 19:42:39.324481 kubelet[2884]: I1212 19:42:39.324410 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr58l\" (UniqueName: \"kubernetes.io/projected/cb91d52e-eadb-42e8-8836-c86b003fbe7b-kube-api-access-cr58l\") pod \"calico-apiserver-76c8ff9cd8-rsckc\" (UID: \"cb91d52e-eadb-42e8-8836-c86b003fbe7b\") " pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc"
Dec 12 19:42:39.324481 kubelet[2884]: I1212 19:42:39.324446 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e1f5ae0-5750-4ed0-9230-cd71bbf186d8-config\") pod \"goldmane-666569f655-6dtpp\" (UID: \"9e1f5ae0-5750-4ed0-9230-cd71bbf186d8\") " pod="calico-system/goldmane-666569f655-6dtpp"
Dec 12 19:42:39.324585 kubelet[2884]: I1212 19:42:39.324484 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2msb\" (UniqueName: \"kubernetes.io/projected/95916008-465f-4755-98cd-82437c8d75be-kube-api-access-f2msb\") pod \"calico-kube-controllers-56475989c-wt7ld\" (UID: \"95916008-465f-4755-98cd-82437c8d75be\") " pod="calico-system/calico-kube-controllers-56475989c-wt7ld"
Dec 12 19:42:39.325381 kubelet[2884]: I1212 19:42:39.325171 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e1f5ae0-5750-4ed0-9230-cd71bbf186d8-goldmane-ca-bundle\") pod \"goldmane-666569f655-6dtpp\" (UID: \"9e1f5ae0-5750-4ed0-9230-cd71bbf186d8\") " pod="calico-system/goldmane-666569f655-6dtpp"
Dec 12 19:42:39.325381 kubelet[2884]: I1212 19:42:39.325222 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cb91d52e-eadb-42e8-8836-c86b003fbe7b-calico-apiserver-certs\") pod \"calico-apiserver-76c8ff9cd8-rsckc\" (UID: \"cb91d52e-eadb-42e8-8836-c86b003fbe7b\") " pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc"
Dec 12 19:42:39.325776 kubelet[2884]: I1212 19:42:39.325259 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8z6h\" (UniqueName: \"kubernetes.io/projected/61f7b8fe-bbd8-4326-bc5c-e785765bcc23-kube-api-access-v8z6h\") pod \"coredns-668d6bf9bc-kk6xs\" (UID: \"61f7b8fe-bbd8-4326-bc5c-e785765bcc23\") " pod="kube-system/coredns-668d6bf9bc-kk6xs"
Dec 12 19:42:39.325776 kubelet[2884]: I1212 19:42:39.325564 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95916008-465f-4755-98cd-82437c8d75be-tigera-ca-bundle\") pod \"calico-kube-controllers-56475989c-wt7ld\" (UID: \"95916008-465f-4755-98cd-82437c8d75be\") " pod="calico-system/calico-kube-controllers-56475989c-wt7ld"
Dec 12 19:42:39.325776 kubelet[2884]: I1212 19:42:39.325609 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa81211e-8b3a-4af8-b6e2-d28a7d96f939-calico-apiserver-certs\") pod \"calico-apiserver-76c8ff9cd8-ml9g6\" (UID: \"fa81211e-8b3a-4af8-b6e2-d28a7d96f939\") " pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6"
Dec 12 19:42:39.325776 kubelet[2884]: I1212 19:42:39.325637 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d836babe-8f7d-4346-8873-331eb853865c-config-volume\") pod \"coredns-668d6bf9bc-t95fc\" (UID: \"d836babe-8f7d-4346-8873-331eb853865c\") " pod="kube-system/coredns-668d6bf9bc-t95fc"
Dec 12 19:42:39.326182 kubelet[2884]: I1212 19:42:39.326155 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdbr\" (UniqueName: \"kubernetes.io/projected/9e1f5ae0-5750-4ed0-9230-cd71bbf186d8-kube-api-access-zmdbr\") pod \"goldmane-666569f655-6dtpp\" (UID: \"9e1f5ae0-5750-4ed0-9230-cd71bbf186d8\") " pod="calico-system/goldmane-666569f655-6dtpp"
Dec 12 19:42:39.326766 kubelet[2884]: I1212 19:42:39.326707 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61f7b8fe-bbd8-4326-bc5c-e785765bcc23-config-volume\") pod \"coredns-668d6bf9bc-kk6xs\" (UID: \"61f7b8fe-bbd8-4326-bc5c-e785765bcc23\") " pod="kube-system/coredns-668d6bf9bc-kk6xs"
Dec 12 19:42:39.327754 kubelet[2884]: I1212 19:42:39.327728 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-backend-key-pair\") pod \"whisker-56bb7f8f79-hm2qk\" (UID: \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\") " pod="calico-system/whisker-56bb7f8f79-hm2qk"
Dec 12 19:42:39.328202 kubelet[2884]: I1212 19:42:39.328166 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8x9j\" (UniqueName: \"kubernetes.io/projected/d836babe-8f7d-4346-8873-331eb853865c-kube-api-access-q8x9j\") pod \"coredns-668d6bf9bc-t95fc\" (UID: \"d836babe-8f7d-4346-8873-331eb853865c\") " pod="kube-system/coredns-668d6bf9bc-t95fc"
Dec 12 19:42:39.328287 kubelet[2884]: I1212 19:42:39.328219 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9e1f5ae0-5750-4ed0-9230-cd71bbf186d8-goldmane-key-pair\") pod \"goldmane-666569f655-6dtpp\" (UID: \"9e1f5ae0-5750-4ed0-9230-cd71bbf186d8\") " pod="calico-system/goldmane-666569f655-6dtpp"
Dec 12 19:42:39.333928 systemd[1]: Created slice kubepods-besteffort-pod9bfee0fd_b637_401b_8c2c_b95c13a62022.slice - libcontainer container kubepods-besteffort-pod9bfee0fd_b637_401b_8c2c_b95c13a62022.slice.
Dec 12 19:42:39.341495 containerd[1566]: time="2025-12-12T19:42:39.340719412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v968r,Uid:9bfee0fd-b637-401b-8c2c-b95c13a62022,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:39.350566 systemd[1]: Created slice kubepods-besteffort-pod9e1f5ae0_5750_4ed0_9230_cd71bbf186d8.slice - libcontainer container kubepods-besteffort-pod9e1f5ae0_5750_4ed0_9230_cd71bbf186d8.slice.
Dec 12 19:42:39.573194 containerd[1566]: time="2025-12-12T19:42:39.573008940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 12 19:42:39.574203 containerd[1566]: time="2025-12-12T19:42:39.573645958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56475989c-wt7ld,Uid:95916008-465f-4755-98cd-82437c8d75be,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:39.581021 containerd[1566]: time="2025-12-12T19:42:39.580934319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-ml9g6,Uid:fa81211e-8b3a-4af8-b6e2-d28a7d96f939,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 19:42:39.607575 containerd[1566]: time="2025-12-12T19:42:39.607313134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bb7f8f79-hm2qk,Uid:a54d3a6c-07c3-4ee2-a301-dcc61165df66,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:39.665016 containerd[1566]: time="2025-12-12T19:42:39.664948896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-rsckc,Uid:cb91d52e-eadb-42e8-8836-c86b003fbe7b,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 19:42:39.680883 containerd[1566]: time="2025-12-12T19:42:39.679994141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6dtpp,Uid:9e1f5ae0-5750-4ed0-9230-cd71bbf186d8,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:39.804144 containerd[1566]: time="2025-12-12T19:42:39.803572976Z" level=error msg="Failed to destroy network for sandbox \"babd077ccc0f2eef839af3a8aff962c82b39d2921d8ef99bee96c3d2d61c3497\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.853218 containerd[1566]: time="2025-12-12T19:42:39.808580703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v968r,Uid:9bfee0fd-b637-401b-8c2c-b95c13a62022,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd077ccc0f2eef839af3a8aff962c82b39d2921d8ef99bee96c3d2d61c3497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.856978 kubelet[2884]: E1212 19:42:39.854381 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd077ccc0f2eef839af3a8aff962c82b39d2921d8ef99bee96c3d2d61c3497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.856978 kubelet[2884]: E1212 19:42:39.854515 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd077ccc0f2eef839af3a8aff962c82b39d2921d8ef99bee96c3d2d61c3497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:39.856978 kubelet[2884]: E1212 19:42:39.854574 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd077ccc0f2eef839af3a8aff962c82b39d2921d8ef99bee96c3d2d61c3497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v968r"
Dec 12 19:42:39.860366 kubelet[2884]: E1212 19:42:39.854669 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"babd077ccc0f2eef839af3a8aff962c82b39d2921d8ef99bee96c3d2d61c3497\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:39.875190 containerd[1566]: time="2025-12-12T19:42:39.875078413Z" level=error msg="Failed to destroy network for sandbox \"dd27175737043921630dd6ab12ecdcd926286500f0d88500911b168e3de0d859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.879421 containerd[1566]: time="2025-12-12T19:42:39.879365044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-ml9g6,Uid:fa81211e-8b3a-4af8-b6e2-d28a7d96f939,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd27175737043921630dd6ab12ecdcd926286500f0d88500911b168e3de0d859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.879892 kubelet[2884]: E1212 19:42:39.879808 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd27175737043921630dd6ab12ecdcd926286500f0d88500911b168e3de0d859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.879978 kubelet[2884]: E1212 19:42:39.879938 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd27175737043921630dd6ab12ecdcd926286500f0d88500911b168e3de0d859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6"
Dec 12 19:42:39.880033 kubelet[2884]: E1212 19:42:39.879978 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd27175737043921630dd6ab12ecdcd926286500f0d88500911b168e3de0d859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6"
Dec 12 19:42:39.880198 kubelet[2884]: E1212 19:42:39.880079 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c8ff9cd8-ml9g6_calico-apiserver(fa81211e-8b3a-4af8-b6e2-d28a7d96f939)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c8ff9cd8-ml9g6_calico-apiserver(fa81211e-8b3a-4af8-b6e2-d28a7d96f939)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd27175737043921630dd6ab12ecdcd926286500f0d88500911b168e3de0d859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:42:39.902029 containerd[1566]: time="2025-12-12T19:42:39.901962966Z" level=error msg="Failed to destroy network for sandbox \"afdb2a20ea90e98e1c8465fb3ecda41a0081a6b759afef10050c586984380b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.904160 containerd[1566]: time="2025-12-12T19:42:39.904115650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56475989c-wt7ld,Uid:95916008-465f-4755-98cd-82437c8d75be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afdb2a20ea90e98e1c8465fb3ecda41a0081a6b759afef10050c586984380b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.904678 kubelet[2884]: E1212 19:42:39.904561 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afdb2a20ea90e98e1c8465fb3ecda41a0081a6b759afef10050c586984380b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.904678 kubelet[2884]: E1212 19:42:39.904645 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afdb2a20ea90e98e1c8465fb3ecda41a0081a6b759afef10050c586984380b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56475989c-wt7ld"
Dec 12 19:42:39.904909 kubelet[2884]: E1212 19:42:39.904673 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afdb2a20ea90e98e1c8465fb3ecda41a0081a6b759afef10050c586984380b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56475989c-wt7ld"
Dec 12 19:42:39.904909 kubelet[2884]: E1212 19:42:39.904768 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56475989c-wt7ld_calico-system(95916008-465f-4755-98cd-82437c8d75be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56475989c-wt7ld_calico-system(95916008-465f-4755-98cd-82437c8d75be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afdb2a20ea90e98e1c8465fb3ecda41a0081a6b759afef10050c586984380b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:42:39.936140 containerd[1566]: time="2025-12-12T19:42:39.936006587Z" level=error msg="Failed to destroy network for sandbox \"45a3d3833963b324ae1c27e2e4c9d336405b893f77422ec8d5ffe014e13e382b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.938930 containerd[1566]: time="2025-12-12T19:42:39.938847156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-rsckc,Uid:cb91d52e-eadb-42e8-8836-c86b003fbe7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a3d3833963b324ae1c27e2e4c9d336405b893f77422ec8d5ffe014e13e382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.940453 kubelet[2884]: E1212 19:42:39.940404 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a3d3833963b324ae1c27e2e4c9d336405b893f77422ec8d5ffe014e13e382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.940580 kubelet[2884]: E1212 19:42:39.940485 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a3d3833963b324ae1c27e2e4c9d336405b893f77422ec8d5ffe014e13e382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc"
Dec 12 19:42:39.940580 kubelet[2884]: E1212 19:42:39.940517 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a3d3833963b324ae1c27e2e4c9d336405b893f77422ec8d5ffe014e13e382b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc"
Dec 12 19:42:39.940687 kubelet[2884]: E1212 19:42:39.940582 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c8ff9cd8-rsckc_calico-apiserver(cb91d52e-eadb-42e8-8836-c86b003fbe7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c8ff9cd8-rsckc_calico-apiserver(cb91d52e-eadb-42e8-8836-c86b003fbe7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a3d3833963b324ae1c27e2e4c9d336405b893f77422ec8d5ffe014e13e382b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:42:39.951352 containerd[1566]: time="2025-12-12T19:42:39.950802748Z" level=error msg="Failed to destroy network for sandbox \"445008ff4fcef4e0be8be7099c87e199f1a2d04a102cff284508740899c2ab54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.952994 containerd[1566]: time="2025-12-12T19:42:39.951881383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bb7f8f79-hm2qk,Uid:a54d3a6c-07c3-4ee2-a301-dcc61165df66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"445008ff4fcef4e0be8be7099c87e199f1a2d04a102cff284508740899c2ab54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.953178 kubelet[2884]: E1212 19:42:39.952269 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445008ff4fcef4e0be8be7099c87e199f1a2d04a102cff284508740899c2ab54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.953178 kubelet[2884]: E1212 19:42:39.952356 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445008ff4fcef4e0be8be7099c87e199f1a2d04a102cff284508740899c2ab54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56bb7f8f79-hm2qk"
Dec 12 19:42:39.953178 kubelet[2884]: E1212 19:42:39.952413 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445008ff4fcef4e0be8be7099c87e199f1a2d04a102cff284508740899c2ab54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56bb7f8f79-hm2qk"
Dec 12 19:42:39.953363 kubelet[2884]: E1212 19:42:39.952492 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56bb7f8f79-hm2qk_calico-system(a54d3a6c-07c3-4ee2-a301-dcc61165df66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56bb7f8f79-hm2qk_calico-system(a54d3a6c-07c3-4ee2-a301-dcc61165df66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"445008ff4fcef4e0be8be7099c87e199f1a2d04a102cff284508740899c2ab54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56bb7f8f79-hm2qk" podUID="a54d3a6c-07c3-4ee2-a301-dcc61165df66"
Dec 12 19:42:39.980452 containerd[1566]: time="2025-12-12T19:42:39.980333331Z" level=error msg="Failed to destroy network for sandbox \"5246283799dff3d8c730c0e16e735b8a78de20ebd498cab80e7174fc2da47014\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.987825 containerd[1566]: time="2025-12-12T19:42:39.987712439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6dtpp,Uid:9e1f5ae0-5750-4ed0-9230-cd71bbf186d8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246283799dff3d8c730c0e16e735b8a78de20ebd498cab80e7174fc2da47014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.988469 kubelet[2884]: E1212 19:42:39.988391 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246283799dff3d8c730c0e16e735b8a78de20ebd498cab80e7174fc2da47014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:39.988629 kubelet[2884]: E1212 19:42:39.988593 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246283799dff3d8c730c0e16e735b8a78de20ebd498cab80e7174fc2da47014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6dtpp"
Dec 12 19:42:39.988723 kubelet[2884]: E1212 19:42:39.988640 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5246283799dff3d8c730c0e16e735b8a78de20ebd498cab80e7174fc2da47014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6dtpp"
Dec 12 19:42:39.988825 kubelet[2884]: E1212 19:42:39.988742 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-6dtpp_calico-system(9e1f5ae0-5750-4ed0-9230-cd71bbf186d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-6dtpp_calico-system(9e1f5ae0-5750-4ed0-9230-cd71bbf186d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5246283799dff3d8c730c0e16e735b8a78de20ebd498cab80e7174fc2da47014\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:42:40.075491 systemd[1]: run-netns-cni\x2df8218044\x2de5a4\x2d6ef2\x2ddd1e\x2d29364e85ed68.mount: Deactivated successfully.
Dec 12 19:42:40.432536 kubelet[2884]: E1212 19:42:40.432362 2884 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 12 19:42:40.433626 kubelet[2884]: E1212 19:42:40.433275 2884 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61f7b8fe-bbd8-4326-bc5c-e785765bcc23-config-volume podName:61f7b8fe-bbd8-4326-bc5c-e785765bcc23 nodeName:}" failed. No retries permitted until 2025-12-12 19:42:40.933228807 +0000 UTC m=+39.999356893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/61f7b8fe-bbd8-4326-bc5c-e785765bcc23-config-volume") pod "coredns-668d6bf9bc-kk6xs" (UID: "61f7b8fe-bbd8-4326-bc5c-e785765bcc23") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 19:42:40.457580 kubelet[2884]: E1212 19:42:40.457408 2884 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Dec 12 19:42:40.457580 kubelet[2884]: E1212 19:42:40.457556 2884 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d836babe-8f7d-4346-8873-331eb853865c-config-volume podName:d836babe-8f7d-4346-8873-331eb853865c nodeName:}" failed. No retries permitted until 2025-12-12 19:42:40.957526288 +0000 UTC m=+40.023654364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d836babe-8f7d-4346-8873-331eb853865c-config-volume") pod "coredns-668d6bf9bc-t95fc" (UID: "d836babe-8f7d-4346-8873-331eb853865c") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 19:42:41.052560 containerd[1566]: time="2025-12-12T19:42:41.052369165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kk6xs,Uid:61f7b8fe-bbd8-4326-bc5c-e785765bcc23,Namespace:kube-system,Attempt:0,}"
Dec 12 19:42:41.114701 containerd[1566]: time="2025-12-12T19:42:41.114426665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t95fc,Uid:d836babe-8f7d-4346-8873-331eb853865c,Namespace:kube-system,Attempt:0,}"
Dec 12 19:42:41.195449 containerd[1566]: time="2025-12-12T19:42:41.195322273Z" level=error msg="Failed to destroy network for sandbox \"858dca3f89c74967d398c5964e5fd8be9b5b384fc96e7e2928c844ff831882d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:41.198919 containerd[1566]: time="2025-12-12T19:42:41.198665938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kk6xs,Uid:61f7b8fe-bbd8-4326-bc5c-e785765bcc23,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"858dca3f89c74967d398c5964e5fd8be9b5b384fc96e7e2928c844ff831882d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:41.199106 kubelet[2884]: E1212 19:42:41.198973 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"858dca3f89c74967d398c5964e5fd8be9b5b384fc96e7e2928c844ff831882d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:41.199106 kubelet[2884]: E1212 19:42:41.199062 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"858dca3f89c74967d398c5964e5fd8be9b5b384fc96e7e2928c844ff831882d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kk6xs"
Dec 12 19:42:41.201275 kubelet[2884]: E1212 19:42:41.199216 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"858dca3f89c74967d398c5964e5fd8be9b5b384fc96e7e2928c844ff831882d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kk6xs"
Dec 12 19:42:41.201275 kubelet[2884]: E1212 19:42:41.199309 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kk6xs_kube-system(61f7b8fe-bbd8-4326-bc5c-e785765bcc23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kk6xs_kube-system(61f7b8fe-bbd8-4326-bc5c-e785765bcc23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"858dca3f89c74967d398c5964e5fd8be9b5b384fc96e7e2928c844ff831882d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kk6xs" podUID="61f7b8fe-bbd8-4326-bc5c-e785765bcc23"
Dec 12 19:42:41.200323 systemd[1]: run-netns-cni\x2ddb22c0a3\x2d2531\x2d3cc1\x2d551a\x2d939026eb1b3c.mount: Deactivated successfully.
Dec 12 19:42:41.277129 containerd[1566]: time="2025-12-12T19:42:41.275503616Z" level=error msg="Failed to destroy network for sandbox \"24bce421c1b75c78851a6356508a05174d172d10d64918ddd3abd40cdbbffe22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:41.281285 containerd[1566]: time="2025-12-12T19:42:41.279416970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t95fc,Uid:d836babe-8f7d-4346-8873-331eb853865c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bce421c1b75c78851a6356508a05174d172d10d64918ddd3abd40cdbbffe22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:41.280474 systemd[1]: run-netns-cni\x2d27b007b0\x2d874b\x2d79e5\x2d8ba4\x2d04027c39c3a9.mount: Deactivated successfully.
Dec 12 19:42:41.282688 kubelet[2884]: E1212 19:42:41.282042 2884 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bce421c1b75c78851a6356508a05174d172d10d64918ddd3abd40cdbbffe22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 19:42:41.282688 kubelet[2884]: E1212 19:42:41.282146 2884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bce421c1b75c78851a6356508a05174d172d10d64918ddd3abd40cdbbffe22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t95fc"
Dec 12 19:42:41.282688 kubelet[2884]: E1212 19:42:41.282190 2884 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bce421c1b75c78851a6356508a05174d172d10d64918ddd3abd40cdbbffe22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t95fc"
Dec 12 19:42:41.284526 kubelet[2884]: E1212 19:42:41.282282 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t95fc_kube-system(d836babe-8f7d-4346-8873-331eb853865c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t95fc_kube-system(d836babe-8f7d-4346-8873-331eb853865c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24bce421c1b75c78851a6356508a05174d172d10d64918ddd3abd40cdbbffe22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t95fc" podUID="d836babe-8f7d-4346-8873-331eb853865c"
Dec 12 19:42:49.274371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335237974.mount: Deactivated successfully.
Dec 12 19:42:49.357142 containerd[1566]: time="2025-12-12T19:42:49.344838340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:49.360668 containerd[1566]: time="2025-12-12T19:42:49.358146158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Dec 12 19:42:49.382566 containerd[1566]: time="2025-12-12T19:42:49.382498258Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:49.385689 containerd[1566]: time="2025-12-12T19:42:49.385636386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 19:42:49.386660 containerd[1566]: time="2025-12-12T19:42:49.386617010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.813528685s"
Dec 12 19:42:49.386750 containerd[1566]: time="2025-12-12T19:42:49.386665072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Dec 12 19:42:49.412124 containerd[1566]: time="2025-12-12T19:42:49.410275899Z" level=info msg="CreateContainer within sandbox \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 12 19:42:49.503123 containerd[1566]: time="2025-12-12T19:42:49.499154662Z" level=info msg="Container 27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 19:42:49.505319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449493160.mount: Deactivated successfully.
Dec 12 19:42:49.545223 containerd[1566]: time="2025-12-12T19:42:49.544921204Z" level=info msg="CreateContainer within sandbox \"b57ba1425cc80b60936598e0d21861dd10c33907e9a577c98cdd61969f2812ee\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c\""
Dec 12 19:42:49.546941 containerd[1566]: time="2025-12-12T19:42:49.546454654Z" level=info msg="StartContainer for \"27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c\""
Dec 12 19:42:49.551912 containerd[1566]: time="2025-12-12T19:42:49.551857484Z" level=info msg="connecting to shim 27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c" address="unix:///run/containerd/s/40397b560dfa9f68398d80bd93ca877a8c10075d829d6ab2d4120ce63b187016" protocol=ttrpc version=3
Dec 12 19:42:49.725358 systemd[1]: Started cri-containerd-27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c.scope - libcontainer container 27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c.
Dec 12 19:42:49.898158 containerd[1566]: time="2025-12-12T19:42:49.898081656Z" level=info msg="StartContainer for \"27ed61c788b6f7580d8671f39c6023847bcb962fc40b9f084713cf2c6655105c\" returns successfully"
Dec 12 19:42:50.223123 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 12 19:42:50.225786 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 12 19:42:50.629747 kubelet[2884]: I1212 19:42:50.629675 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-ca-bundle\") pod \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\" (UID: \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\") "
Dec 12 19:42:50.632555 kubelet[2884]: I1212 19:42:50.629773 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhbvq\" (UniqueName: \"kubernetes.io/projected/a54d3a6c-07c3-4ee2-a301-dcc61165df66-kube-api-access-dhbvq\") pod \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\" (UID: \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\") "
Dec 12 19:42:50.632555 kubelet[2884]: I1212 19:42:50.629812 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-backend-key-pair\") pod \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\" (UID: \"a54d3a6c-07c3-4ee2-a301-dcc61165df66\") "
Dec 12 19:42:50.632555 kubelet[2884]: I1212 19:42:50.630702 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a54d3a6c-07c3-4ee2-a301-dcc61165df66" (UID: "a54d3a6c-07c3-4ee2-a301-dcc61165df66"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 19:42:50.636316 kubelet[2884]: I1212 19:42:50.636242 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a54d3a6c-07c3-4ee2-a301-dcc61165df66" (UID: "a54d3a6c-07c3-4ee2-a301-dcc61165df66"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 19:42:50.638230 systemd[1]: var-lib-kubelet-pods-a54d3a6c\x2d07c3\x2d4ee2\x2da301\x2ddcc61165df66-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Dec 12 19:42:50.642213 systemd[1]: var-lib-kubelet-pods-a54d3a6c\x2d07c3\x2d4ee2\x2da301\x2ddcc61165df66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhbvq.mount: Deactivated successfully.
Dec 12 19:42:50.645416 kubelet[2884]: I1212 19:42:50.645353 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54d3a6c-07c3-4ee2-a301-dcc61165df66-kube-api-access-dhbvq" (OuterVolumeSpecName: "kube-api-access-dhbvq") pod "a54d3a6c-07c3-4ee2-a301-dcc61165df66" (UID: "a54d3a6c-07c3-4ee2-a301-dcc61165df66"). InnerVolumeSpecName "kube-api-access-dhbvq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 19:42:50.731004 kubelet[2884]: I1212 19:42:50.730875 2884 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhbvq\" (UniqueName: \"kubernetes.io/projected/a54d3a6c-07c3-4ee2-a301-dcc61165df66-kube-api-access-dhbvq\") on node \"srv-tupcq.gb1.brightbox.com\" DevicePath \"\""
Dec 12 19:42:50.731004 kubelet[2884]: I1212 19:42:50.730921 2884 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-backend-key-pair\") on node \"srv-tupcq.gb1.brightbox.com\" DevicePath \"\""
Dec 12 19:42:50.731004 kubelet[2884]: I1212 19:42:50.730952 2884 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54d3a6c-07c3-4ee2-a301-dcc61165df66-whisker-ca-bundle\") on node \"srv-tupcq.gb1.brightbox.com\" DevicePath \"\""
Dec 12 19:42:50.735284 systemd[1]: Removed slice kubepods-besteffort-poda54d3a6c_07c3_4ee2_a301_dcc61165df66.slice - libcontainer container kubepods-besteffort-poda54d3a6c_07c3_4ee2_a301_dcc61165df66.slice.
Dec 12 19:42:50.764158 kubelet[2884]: I1212 19:42:50.762241 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pt92m" podStartSLOduration=2.656663494 podStartE2EDuration="24.762209668s" podCreationTimestamp="2025-12-12 19:42:26 +0000 UTC" firstStartedPulling="2025-12-12 19:42:27.2823196 +0000 UTC m=+26.348447676" lastFinishedPulling="2025-12-12 19:42:49.387865777 +0000 UTC m=+48.453993850" observedRunningTime="2025-12-12 19:42:50.759459391 +0000 UTC m=+49.825587485" watchObservedRunningTime="2025-12-12 19:42:50.762209668 +0000 UTC m=+49.828337764"
Dec 12 19:42:50.913419 systemd[1]: Created slice kubepods-besteffort-pod28a1d6ff_e4a7_417e_8d50_93082a2b90ea.slice - libcontainer container kubepods-besteffort-pod28a1d6ff_e4a7_417e_8d50_93082a2b90ea.slice.
Dec 12 19:42:51.034601 kubelet[2884]: I1212 19:42:51.034534 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28a1d6ff-e4a7-417e-8d50-93082a2b90ea-whisker-ca-bundle\") pod \"whisker-995586667-ttmxg\" (UID: \"28a1d6ff-e4a7-417e-8d50-93082a2b90ea\") " pod="calico-system/whisker-995586667-ttmxg"
Dec 12 19:42:51.034601 kubelet[2884]: I1212 19:42:51.034623 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28a1d6ff-e4a7-417e-8d50-93082a2b90ea-whisker-backend-key-pair\") pod \"whisker-995586667-ttmxg\" (UID: \"28a1d6ff-e4a7-417e-8d50-93082a2b90ea\") " pod="calico-system/whisker-995586667-ttmxg"
Dec 12 19:42:51.034957 kubelet[2884]: I1212 19:42:51.034661 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcrd\" (UniqueName: \"kubernetes.io/projected/28a1d6ff-e4a7-417e-8d50-93082a2b90ea-kube-api-access-rxcrd\") pod \"whisker-995586667-ttmxg\" (UID: \"28a1d6ff-e4a7-417e-8d50-93082a2b90ea\") " pod="calico-system/whisker-995586667-ttmxg"
Dec 12 19:42:51.223520 containerd[1566]: time="2025-12-12T19:42:51.222955179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-995586667-ttmxg,Uid:28a1d6ff-e4a7-417e-8d50-93082a2b90ea,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:51.251082 kubelet[2884]: I1212 19:42:51.250849 2884 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54d3a6c-07c3-4ee2-a301-dcc61165df66" path="/var/lib/kubelet/pods/a54d3a6c-07c3-4ee2-a301-dcc61165df66/volumes"
Dec 12 19:42:51.764555 systemd-networkd[1501]: cali7ade669371f: Link UP
Dec 12 19:42:51.768357 systemd-networkd[1501]: cali7ade669371f: Gained carrier
Dec 12 19:42:51.817754 containerd[1566]: 2025-12-12 19:42:51.301 [INFO][3936] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 12 19:42:51.817754 containerd[1566]: 2025-12-12 19:42:51.405 [INFO][3936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0 whisker-995586667- calico-system 28a1d6ff-e4a7-417e-8d50-93082a2b90ea 921 0 2025-12-12 19:42:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:995586667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com whisker-995586667-ttmxg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7ade669371f [] [] }} ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-"
Dec 12 19:42:51.817754 containerd[1566]: 2025-12-12 19:42:51.406 [INFO][3936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.817754 containerd[1566]: 2025-12-12 19:42:51.637 [INFO][3949] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" HandleID="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Workload="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.640 [INFO][3949] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" HandleID="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Workload="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000343b30), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-tupcq.gb1.brightbox.com", "pod":"whisker-995586667-ttmxg", "timestamp":"2025-12-12 19:42:51.637011601 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.640 [INFO][3949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.641 [INFO][3949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.642 [INFO][3949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com'
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.661 [INFO][3949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.675 [INFO][3949] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.684 [INFO][3949] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.688 [INFO][3949] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819249 containerd[1566]: 2025-12-12 19:42:51.695 [INFO][3949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.695 [INFO][3949] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.699 [INFO][3949] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.709 [INFO][3949] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.728 [INFO][3949] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.193/26] block=192.168.91.192/26 handle="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.728 [INFO][3949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.193/26] handle="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.728 [INFO][3949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 19:42:51.819652 containerd[1566]: 2025-12-12 19:42:51.729 [INFO][3949] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.193/26] IPv6=[] ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" HandleID="k8s-pod-network.c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Workload="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.821467 containerd[1566]: 2025-12-12 19:42:51.737 [INFO][3936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0", GenerateName:"whisker-995586667-", Namespace:"calico-system", SelfLink:"", UID:"28a1d6ff-e4a7-417e-8d50-93082a2b90ea", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"995586667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"whisker-995586667-ttmxg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ade669371f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:51.821467 containerd[1566]: 2025-12-12 19:42:51.738 [INFO][3936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.193/32] ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.822893 containerd[1566]: 2025-12-12 19:42:51.738 [INFO][3936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ade669371f ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.822893 containerd[1566]: 2025-12-12 19:42:51.776 [INFO][3936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.823018 containerd[1566]: 2025-12-12 19:42:51.776 [INFO][3936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0", GenerateName:"whisker-995586667-", Namespace:"calico-system", SelfLink:"", UID:"28a1d6ff-e4a7-417e-8d50-93082a2b90ea", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"995586667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627", Pod:"whisker-995586667-ttmxg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ade669371f", MAC:"92:84:a0:69:57:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:51.823148 containerd[1566]: 2025-12-12 19:42:51.809 [INFO][3936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" Namespace="calico-system" Pod="whisker-995586667-ttmxg" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-whisker--995586667--ttmxg-eth0"
Dec 12 19:42:51.982541 containerd[1566]: time="2025-12-12T19:42:51.982399893Z" level=info msg="connecting to shim c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627" address="unix:///run/containerd/s/54164302a1a202b9fd5aeefdbd54645b5a258afbd923a6736a340b0f5e7b770f" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:42:52.045817 systemd[1]: Started cri-containerd-c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627.scope - libcontainer container c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627.
Dec 12 19:42:52.148025 containerd[1566]: time="2025-12-12T19:42:52.147953084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-995586667-ttmxg,Uid:28a1d6ff-e4a7-417e-8d50-93082a2b90ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"c95fb0a9e0c11f43d92c504c59961a387aa7250c40003e0b7fc374d15ecc5627\""
Dec 12 19:42:52.152172 containerd[1566]: time="2025-12-12T19:42:52.152129202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 19:42:52.237646 containerd[1566]: time="2025-12-12T19:42:52.237562769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-rsckc,Uid:cb91d52e-eadb-42e8-8836-c86b003fbe7b,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 19:42:52.525288 containerd[1566]: time="2025-12-12T19:42:52.525137561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:42:52.527258 containerd[1566]: time="2025-12-12T19:42:52.527201749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 19:42:52.527408 containerd[1566]: time="2025-12-12T19:42:52.527370036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 19:42:52.530513 kubelet[2884]: E1212 19:42:52.530310 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 19:42:52.532145 kubelet[2884]: E1212 19:42:52.530746 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 19:42:52.554619 kubelet[2884]: E1212 19:42:52.554126 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e818186c85fb4d2e91b6612b2aa24cc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:42:52.561543 systemd-networkd[1501]: calie9d5d1d36f2: Link UP
Dec 12 19:42:52.569056 systemd-networkd[1501]: calie9d5d1d36f2: Gained carrier
Dec 12 19:42:52.573459 containerd[1566]: time="2025-12-12T19:42:52.570424565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 19:42:52.618311 containerd[1566]: 2025-12-12 19:42:52.348 [INFO][4064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 12 19:42:52.618311 containerd[1566]: 2025-12-12 19:42:52.372 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0 calico-apiserver-76c8ff9cd8- calico-apiserver cb91d52e-eadb-42e8-8836-c86b003fbe7b 850 0 2025-12-12 19:42:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c8ff9cd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com calico-apiserver-76c8ff9cd8-rsckc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie9d5d1d36f2 [] [] }} ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-"
Dec 12 19:42:52.618311 containerd[1566]: 2025-12-12 19:42:52.373 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0"
Dec 12 19:42:52.618311 containerd[1566]: 2025-12-12 19:42:52.459 [INFO][4116] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" HandleID="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0"
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.461 [INFO][4116] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" HandleID="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-tupcq.gb1.brightbox.com", "pod":"calico-apiserver-76c8ff9cd8-rsckc", "timestamp":"2025-12-12 19:42:52.459063238 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.462 [INFO][4116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.462 [INFO][4116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.462 [INFO][4116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com'
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.477 [INFO][4116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.488 [INFO][4116] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.498 [INFO][4116] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.502 [INFO][4116] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:52.619002 containerd[1566]: 2025-12-12 19:42:52.505 [INFO][4116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.505 [INFO][4116] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.509 [INFO][4116] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5
Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.514 [INFO][4116] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26
handle="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.524 [INFO][4116] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.194/26] block=192.168.91.192/26 handle="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.524 [INFO][4116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.194/26] handle="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.525 [INFO][4116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 19:42:52.619782 containerd[1566]: 2025-12-12 19:42:52.525 [INFO][4116] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.194/26] IPv6=[] ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" HandleID="k8s-pod-network.02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" Dec 12 19:42:52.622512 containerd[1566]: 2025-12-12 19:42:52.540 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0", GenerateName:"calico-apiserver-76c8ff9cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb91d52e-eadb-42e8-8836-c86b003fbe7b", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c8ff9cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-76c8ff9cd8-rsckc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9d5d1d36f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:52.622640 containerd[1566]: 2025-12-12 19:42:52.541 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.194/32] ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" Dec 12 19:42:52.622640 
containerd[1566]: 2025-12-12 19:42:52.541 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9d5d1d36f2 ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" Dec 12 19:42:52.622640 containerd[1566]: 2025-12-12 19:42:52.575 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" Dec 12 19:42:52.622809 containerd[1566]: 2025-12-12 19:42:52.578 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0", GenerateName:"calico-apiserver-76c8ff9cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb91d52e-eadb-42e8-8836-c86b003fbe7b", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c8ff9cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5", Pod:"calico-apiserver-76c8ff9cd8-rsckc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9d5d1d36f2", MAC:"8e:af:c1:b4:df:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:52.622925 containerd[1566]: 2025-12-12 19:42:52.602 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-rsckc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--rsckc-eth0" Dec 12 19:42:52.684629 containerd[1566]: time="2025-12-12T19:42:52.684427877Z" level=info msg="connecting to shim 02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5" address="unix:///run/containerd/s/99c6db6a93f3e9bc9afe569d74d2647df7aeea9f0923fca523fcb96004b4a4cc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:52.760676 systemd[1]: Started 
cri-containerd-02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5.scope - libcontainer container 02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5. Dec 12 19:42:52.892104 containerd[1566]: time="2025-12-12T19:42:52.892026339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:42:52.902038 containerd[1566]: time="2025-12-12T19:42:52.901918900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 19:42:52.902038 containerd[1566]: time="2025-12-12T19:42:52.901981275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 19:42:52.903335 kubelet[2884]: E1212 19:42:52.903272 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:42:52.903484 kubelet[2884]: E1212 19:42:52.903357 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:42:52.905897 kubelet[2884]: E1212 19:42:52.904780 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 19:42:52.907451 kubelet[2884]: E1212 19:42:52.907343 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea" Dec 12 19:42:52.923198 containerd[1566]: time="2025-12-12T19:42:52.922704391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-rsckc,Uid:cb91d52e-eadb-42e8-8836-c86b003fbe7b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"02e9fbc6d4a12ba4d8562a9cdeb31db0043a737a2e617432da0ec8b4ce916ae5\"" Dec 12 19:42:52.928135 containerd[1566]: time="2025-12-12T19:42:52.928072674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
12 19:42:53.238073 containerd[1566]: time="2025-12-12T19:42:53.237624377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:42:53.242727 containerd[1566]: time="2025-12-12T19:42:53.240009891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kk6xs,Uid:61f7b8fe-bbd8-4326-bc5c-e785765bcc23,Namespace:kube-system,Attempt:0,}" Dec 12 19:42:53.242727 containerd[1566]: time="2025-12-12T19:42:53.240488466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:42:53.242727 containerd[1566]: time="2025-12-12T19:42:53.241587807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 19:42:53.243579 kubelet[2884]: E1212 19:42:53.241801 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:42:53.243579 kubelet[2884]: E1212 19:42:53.242070 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:42:53.243579 kubelet[2884]: E1212 19:42:53.242888 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr58l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-rsckc_calico-apiserver(cb91d52e-eadb-42e8-8836-c86b003fbe7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:42:53.243833 containerd[1566]: time="2025-12-12T19:42:53.242726359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6dtpp,Uid:9e1f5ae0-5750-4ed0-9230-cd71bbf186d8,Namespace:calico-system,Attempt:0,}" Dec 12 19:42:53.244404 kubelet[2884]: E1212 19:42:53.244081 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b" Dec 12 19:42:53.544364 systemd-networkd[1501]: calida02abfbcd4: Link UP Dec 12 19:42:53.548507 systemd-networkd[1501]: calida02abfbcd4: Gained carrier Dec 12 19:42:53.559802 systemd-networkd[1501]: cali7ade669371f: Gained IPv6LL Dec 12 19:42:53.606306 containerd[1566]: 2025-12-12 19:42:53.369 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0 coredns-668d6bf9bc- kube-system 61f7b8fe-bbd8-4326-bc5c-e785765bcc23 838 0 2025-12-12 19:42:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com coredns-668d6bf9bc-kk6xs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida02abfbcd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-" Dec 12 19:42:53.606306 containerd[1566]: 2025-12-12 19:42:53.369 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" 
WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.606306 containerd[1566]: 2025-12-12 19:42:53.443 [INFO][4254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" HandleID="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Workload="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.444 [INFO][4254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" HandleID="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Workload="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5550), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-tupcq.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-kk6xs", "timestamp":"2025-12-12 19:42:53.443853155 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.444 [INFO][4254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.444 [INFO][4254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.444 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com' Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.464 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.474 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.496 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.500 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.606794 containerd[1566]: 2025-12-12 19:42:53.506 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.507 [INFO][4254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.510 [INFO][4254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb Dec 12 19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.517 [INFO][4254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" host="srv-tupcq.gb1.brightbox.com" Dec 12 
19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.527 [INFO][4254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.195/26] block=192.168.91.192/26 handle="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.527 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.195/26] handle="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.528 [INFO][4254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 19:42:53.607303 containerd[1566]: 2025-12-12 19:42:53.528 [INFO][4254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.195/26] IPv6=[] ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" HandleID="k8s-pod-network.b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Workload="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.607585 containerd[1566]: 2025-12-12 19:42:53.534 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"61f7b8fe-bbd8-4326-bc5c-e785765bcc23", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-kk6xs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida02abfbcd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:53.607585 containerd[1566]: 2025-12-12 19:42:53.535 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.195/32] ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" 
WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.607585 containerd[1566]: 2025-12-12 19:42:53.535 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida02abfbcd4 ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.607585 containerd[1566]: 2025-12-12 19:42:53.548 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.607585 containerd[1566]: 2025-12-12 19:42:53.554 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"61f7b8fe-bbd8-4326-bc5c-e785765bcc23", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb", Pod:"coredns-668d6bf9bc-kk6xs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida02abfbcd4", MAC:"86:7a:5c:43:36:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:53.607585 containerd[1566]: 2025-12-12 19:42:53.597 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" Namespace="kube-system" Pod="coredns-668d6bf9bc-kk6xs" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kk6xs-eth0" Dec 12 19:42:53.666295 containerd[1566]: time="2025-12-12T19:42:53.666215079Z" level=info msg="connecting to 
shim b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb" address="unix:///run/containerd/s/2ebfb735b5a7576b0860fa38f7eddcb66a66e5e27c5a6dfae68c2f659b77c209" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:53.677678 systemd-networkd[1501]: cali7bcb1bc44a4: Link UP Dec 12 19:42:53.682933 systemd-networkd[1501]: cali7bcb1bc44a4: Gained carrier Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.410 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0 goldmane-666569f655- calico-system 9e1f5ae0-5750-4ed0-9230-cd71bbf186d8 851 0 2025-12-12 19:42:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com goldmane-666569f655-6dtpp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7bcb1bc44a4 [] [] }} ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.414 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.512 [INFO][4261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" HandleID="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Workload="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.514 [INFO][4261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" HandleID="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Workload="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-tupcq.gb1.brightbox.com", "pod":"goldmane-666569f655-6dtpp", "timestamp":"2025-12-12 19:42:53.512628681 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.514 [INFO][4261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.528 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.529 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com' Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.566 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.593 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.610 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.614 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.619 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.619 [INFO][4261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.622 [INFO][4261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12 Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.638 [INFO][4261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.656 [INFO][4261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.196/26] block=192.168.91.192/26 handle="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.656 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.196/26] handle="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" host="srv-tupcq.gb1.brightbox.com" Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.659 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
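
The run above is one complete IPAM cycle: take the host-wide lock, confirm this node's affinity to block 192.168.91.192/26, claim the next free address (.196 for goldmane here, after .193-.195), write the block back, and release the lock. The "next free address in an affine block" step can be sketched with Go's net/netip; the claimed set below is read off this log, and the sketch only illustrates the allocation order, not Calico's actual data structures or locking:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not yet claimed.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.91.192/26") // this node's affine block
	claimed := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.91.192"): true, // block/network address, never handed out in this log
		netip.MustParseAddr("192.168.91.193"): true, // whisker
		netip.MustParseAddr("192.168.91.194"): true, // calico-apiserver
		netip.MustParseAddr("192.168.91.195"): true, // coredns
	}
	if a, ok := nextFree(block, claimed); ok {
		fmt.Println("next assignment:", a) // 192.168.91.196, matching goldmane above
	}
}
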
Dec 12 19:42:53.734729 containerd[1566]: 2025-12-12 19:42:53.659 [INFO][4261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.196/26] IPv6=[] ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" HandleID="k8s-pod-network.0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Workload="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.736540 containerd[1566]: 2025-12-12 19:42:53.668 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9e1f5ae0-5750-4ed0-9230-cd71bbf186d8", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-6dtpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bcb1bc44a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:53.736540 containerd[1566]: 2025-12-12 19:42:53.668 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.196/32] ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.736540 containerd[1566]: 2025-12-12 19:42:53.668 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bcb1bc44a4 ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.736540 containerd[1566]: 2025-12-12 19:42:53.684 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.736540 containerd[1566]: 2025-12-12 19:42:53.687 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" 
Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9e1f5ae0-5750-4ed0-9230-cd71bbf186d8", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12", Pod:"goldmane-666569f655-6dtpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bcb1bc44a4", MAC:"62:24:10:8e:c7:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:53.736540 containerd[1566]: 2025-12-12 19:42:53.725 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" Namespace="calico-system" Pod="goldmane-666569f655-6dtpp" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-goldmane--666569f655--6dtpp-eth0" Dec 12 19:42:53.750449 systemd[1]: Started cri-containerd-b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb.scope - libcontainer container b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb. 
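
A decoding note for the coredns endpoint dumps earlier in this sequence: the Go-style struct printing renders WorkloadEndpointPort values in hex, so Port:0x35 is DNS port 53 and Port:0x23c1 is the metrics port 9153, which lines up with the dns, dns-tcp, and metrics names beside them. A throwaway conversion check:

package main

import "fmt"

func main() {
	// Hex port values as printed in the coredns WorkloadEndpoint dumps above.
	for _, p := range []uint16{0x35, 0x23c1} {
		fmt.Printf("0x%x = %d\n", p, p) // 0x35 = 53 (dns/dns-tcp), 0x23c1 = 9153 (metrics)
	}
}
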
Dec 12 19:42:53.756793 kubelet[2884]: E1212 19:42:53.756610 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea" Dec 12 19:42:53.757683 kubelet[2884]: E1212 19:42:53.757016 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b" Dec 12 19:42:53.827835 containerd[1566]: time="2025-12-12T19:42:53.827654141Z" level=info msg="connecting to shim 0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12" address="unix:///run/containerd/s/3942ff167d59e4399e2b6d4aff4bfb84c8be6c151e7345a73c2484bd6c18c257" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:53.898375 systemd[1]: Started cri-containerd-0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12.scope - libcontainer container 0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12. Dec 12 19:42:54.016573 containerd[1566]: time="2025-12-12T19:42:54.016502197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kk6xs,Uid:61f7b8fe-bbd8-4326-bc5c-e785765bcc23,Namespace:kube-system,Attempt:0,} returns sandbox id \"b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb\"" Dec 12 19:42:54.024490 containerd[1566]: time="2025-12-12T19:42:54.024340969Z" level=info msg="CreateContainer within sandbox \"b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 19:42:54.068059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884573055.mount: Deactivated successfully. 
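
Every image pull in this section fails the same way: ghcr.io returns 404 Not Found for the flatcar/calico v3.30.4 tags, containerd reports NotFound, and kubelet escalates from ErrImagePull to ImagePullBackOff, as in the two entries above. When triaging a log like this, it helps to collapse the noise down to the distinct failing references; a small stdin filter, written as illustrative tooling under the assumption that it reads the escaped containerd error format shown in this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches containerd's escaped error entries, e.g.
	//   level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed"
	re := regexp.MustCompile(`PullImage \\"([^"\\]+)\\" failed`)

	seen := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			seen[m[1]]++
		}
	}
	for img, n := range seen {
		fmt.Printf("%d pull failure(s): %s\n", n, img)
	}
}

Fed this boot's journal, it would report whisker, whisker-backend, apiserver, and goldmane, all at v3.30.4 and all 404s against ghcr.io, which points at one shared cause (the tags missing from the registry, or a wrong registry/tag in the deployment) rather than four separate problems.
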
Dec 12 19:42:54.075719 containerd[1566]: time="2025-12-12T19:42:54.074872304Z" level=info msg="Container 3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287: CDI devices from CRI Config.CDIDevices: []"
Dec 12 19:42:54.086957 containerd[1566]: time="2025-12-12T19:42:54.086787267Z" level=info msg="CreateContainer within sandbox \"b70a551e18f53589329b1ed6dc75fba3ecbddd6cf7b750e3772f35e263ce39eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287\""
Dec 12 19:42:54.089207 containerd[1566]: time="2025-12-12T19:42:54.089173971Z" level=info msg="StartContainer for \"3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287\""
Dec 12 19:42:54.092946 containerd[1566]: time="2025-12-12T19:42:54.092871374Z" level=info msg="connecting to shim 3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287" address="unix:///run/containerd/s/2ebfb735b5a7576b0860fa38f7eddcb66a66e5e27c5a6dfae68c2f659b77c209" protocol=ttrpc version=3
Dec 12 19:42:54.148343 systemd[1]: Started cri-containerd-3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287.scope - libcontainer container 3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287.
Dec 12 19:42:54.236465 containerd[1566]: time="2025-12-12T19:42:54.236216981Z" level=info msg="StartContainer for \"3b5a8ac322583c61b95004a146630298ca9062c7e01af9710c10bb1c09ecd287\" returns successfully"
Dec 12 19:42:54.241430 containerd[1566]: time="2025-12-12T19:42:54.241385628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v968r,Uid:9bfee0fd-b637-401b-8c2c-b95c13a62022,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:54.245797 containerd[1566]: time="2025-12-12T19:42:54.245306092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-ml9g6,Uid:fa81211e-8b3a-4af8-b6e2-d28a7d96f939,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 19:42:54.249798 containerd[1566]: time="2025-12-12T19:42:54.248867686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56475989c-wt7ld,Uid:95916008-465f-4755-98cd-82437c8d75be,Namespace:calico-system,Attempt:0,}"
Dec 12 19:42:54.254339 systemd-networkd[1501]: calie9d5d1d36f2: Gained IPv6LL
Dec 12 19:42:54.287321 containerd[1566]: time="2025-12-12T19:42:54.287254005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6dtpp,Uid:9e1f5ae0-5750-4ed0-9230-cd71bbf186d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c3c4c9c32238c7bd9e8b7345d6e0405857577e842e7ea0d114f26c538792a12\""
Dec 12 19:42:54.296070 containerd[1566]: time="2025-12-12T19:42:54.296019204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 19:42:54.632211 containerd[1566]: time="2025-12-12T19:42:54.632122345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:42:54.635045 containerd[1566]: time="2025-12-12T19:42:54.634637221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 19:42:54.635696 containerd[1566]: time="2025-12-12T19:42:54.635319190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:42:54.639032 systemd-networkd[1501]: calida02abfbcd4: Gained IPv6LL
Dec 12 19:42:54.656050 kubelet[2884]: E1212 19:42:54.635820 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 19:42:54.656050 kubelet[2884]: E1212 19:42:54.655761 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 19:42:54.666473 kubelet[2884]: E1212 19:42:54.666352 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmdbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6dtpp_calico-system(9e1f5ae0-5750-4ed0-9230-cd71bbf186d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:42:54.685008 kubelet[2884]: E1212 19:42:54.684593 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:42:54.832730 systemd-networkd[1501]: calid5d1e72d9a9: Link UP
Dec 12 19:42:54.843851 systemd-networkd[1501]: vxlan.calico: Link UP
Dec 12 19:42:54.843863 systemd-networkd[1501]: vxlan.calico: Gained carrier
Dec 12 19:42:54.857492 systemd-networkd[1501]: calid5d1e72d9a9: Gained carrier
Dec 12 19:42:54.863177 kubelet[2884]: E1212 19:42:54.860640 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:42:54.863177 kubelet[2884]: E1212 19:42:54.860927 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.418 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0 csi-node-driver- calico-system 9bfee0fd-b637-401b-8c2c-b95c13a62022 728 0 2025-12-12 19:42:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com csi-node-driver-v968r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid5d1e72d9a9 [] [] }} ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.419 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.615 [INFO][4461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" HandleID="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Workload="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.619 [INFO][4461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" HandleID="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Workload="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003270c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-tupcq.gb1.brightbox.com", "pod":"csi-node-driver-v968r", "timestamp":"2025-12-12 19:42:54.615837229 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.619 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.619 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.620 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com'
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.680 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.705 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.735 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.739 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.743 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.743 [INFO][4461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.746 [INFO][4461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.751 [INFO][4461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.765 [INFO][4461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.197/26] block=192.168.91.192/26 handle="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.766 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.197/26] handle="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.769 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 19:42:54.906494 containerd[1566]: 2025-12-12 19:42:54.769 [INFO][4461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.197/26] IPv6=[] ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" HandleID="k8s-pod-network.050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Workload="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.911228 containerd[1566]: 2025-12-12 19:42:54.804 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9bfee0fd-b637-401b-8c2c-b95c13a62022", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-v968r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5d1e72d9a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:54.911228 containerd[1566]: 2025-12-12 19:42:54.805 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.197/32] ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.911228 containerd[1566]: 2025-12-12 19:42:54.805 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5d1e72d9a9 ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.911228 containerd[1566]: 2025-12-12 19:42:54.836 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.911228 containerd[1566]: 2025-12-12 19:42:54.845 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9bfee0fd-b637-401b-8c2c-b95c13a62022", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b", Pod:"csi-node-driver-v968r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5d1e72d9a9", MAC:"a2:c6:c0:03:4a:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:54.911228 containerd[1566]: 2025-12-12 19:42:54.899 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" Namespace="calico-system" Pod="csi-node-driver-v968r" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-csi--node--driver--v968r-eth0"
Dec 12 19:42:54.933201 kubelet[2884]: I1212 19:42:54.930912 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kk6xs" podStartSLOduration=48.930876606 podStartE2EDuration="48.930876606s" podCreationTimestamp="2025-12-12 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:42:54.929322975 +0000 UTC m=+53.995451063" watchObservedRunningTime="2025-12-12 19:42:54.930876606 +0000 UTC m=+53.997004697"
Dec 12 19:42:54.985116 containerd[1566]: time="2025-12-12T19:42:54.985020248Z" level=info msg="connecting to shim 050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b" address="unix:///run/containerd/s/d8d2a56b90fce63fceb41f0d581567662d86ec4dadc911fce98ec40b8cfcebfe" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:42:55.081373 systemd[1]: Started cri-containerd-050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b.scope - libcontainer container 050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b.
Dec 12 19:42:55.083709 systemd-networkd[1501]: cali472ad8d667d: Link UP
Dec 12 19:42:55.085480 systemd-networkd[1501]: cali472ad8d667d: Gained carrier
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.505 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0 calico-kube-controllers-56475989c- calico-system 95916008-465f-4755-98cd-82437c8d75be 843 0 2025-12-12 19:42:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56475989c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com calico-kube-controllers-56475989c-wt7ld eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali472ad8d667d [] [] }} ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.505 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.683 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" HandleID="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.685 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" HandleID="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000636000), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-tupcq.gb1.brightbox.com", "pod":"calico-kube-controllers-56475989c-wt7ld", "timestamp":"2025-12-12 19:42:54.683906014 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.685 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.768 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.776 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com'
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.851 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.914 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.940 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.950 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.963 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.963 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:54.979 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:55.017 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:55.040 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.198/26] block=192.168.91.192/26 handle="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:55.040 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.198/26] handle="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:55.040 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 19:42:55.130531 containerd[1566]: 2025-12-12 19:42:55.040 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.198/26] IPv6=[] ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" HandleID="k8s-pod-network.14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.134299 containerd[1566]: 2025-12-12 19:42:55.054 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0", GenerateName:"calico-kube-controllers-56475989c-", Namespace:"calico-system", SelfLink:"", UID:"95916008-465f-4755-98cd-82437c8d75be", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56475989c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-56475989c-wt7ld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali472ad8d667d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:55.134299 containerd[1566]: 2025-12-12 19:42:55.055 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.198/32] ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.134299 containerd[1566]: 2025-12-12 19:42:55.055 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali472ad8d667d ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.134299 containerd[1566]: 2025-12-12 19:42:55.084 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.134299 containerd[1566]: 2025-12-12 19:42:55.091 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0", GenerateName:"calico-kube-controllers-56475989c-", Namespace:"calico-system", SelfLink:"", UID:"95916008-465f-4755-98cd-82437c8d75be", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56475989c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91", Pod:"calico-kube-controllers-56475989c-wt7ld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali472ad8d667d", MAC:"2a:33:09:a8:5e:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:55.134299 containerd[1566]: 2025-12-12 19:42:55.123 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" Namespace="calico-system" Pod="calico-kube-controllers-56475989c-wt7ld" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--kube--controllers--56475989c--wt7ld-eth0"
Dec 12 19:42:55.198892 containerd[1566]: time="2025-12-12T19:42:55.198579564Z" level=info msg="connecting to shim 14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91" address="unix:///run/containerd/s/2cebfa5f81ecac87a114056886a614cc884fceb0aa95f28399e26507da79ae5f" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:42:55.237655 systemd-networkd[1501]: cali1b7451e7cd6: Link UP
Dec 12 19:42:55.246761 systemd-networkd[1501]: cali1b7451e7cd6: Gained carrier
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:54.516 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0 calico-apiserver-76c8ff9cd8- calico-apiserver fa81211e-8b3a-4af8-b6e2-d28a7d96f939 847 0 2025-12-12 19:42:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c8ff9cd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com calico-apiserver-76c8ff9cd8-ml9g6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b7451e7cd6 [] [] }} ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:54.518 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:54.698 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" HandleID="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:54.698 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" HandleID="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-tupcq.gb1.brightbox.com", "pod":"calico-apiserver-76c8ff9cd8-ml9g6", "timestamp":"2025-12-12 19:42:54.698392458 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:54.709 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.040 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.040 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com'
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.096 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.126 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.140 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.144 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.156 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.156 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.170 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.180 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.201 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.199/26] block=192.168.91.192/26 handle="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.202 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.199/26] handle="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.203 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 19:42:55.298289 containerd[1566]: 2025-12-12 19:42:55.203 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.199/26] IPv6=[] ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" HandleID="k8s-pod-network.bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Workload="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.306223 containerd[1566]: 2025-12-12 19:42:55.213 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0", GenerateName:"calico-apiserver-76c8ff9cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa81211e-8b3a-4af8-b6e2-d28a7d96f939", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c8ff9cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-76c8ff9cd8-ml9g6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b7451e7cd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:55.306223 containerd[1566]: 2025-12-12 19:42:55.214 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.199/32] ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.306223 containerd[1566]: 2025-12-12 19:42:55.214 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b7451e7cd6 ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.306223 containerd[1566]: 2025-12-12 19:42:55.249 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.306223 containerd[1566]: 2025-12-12 19:42:55.253 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0", GenerateName:"calico-apiserver-76c8ff9cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa81211e-8b3a-4af8-b6e2-d28a7d96f939", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c8ff9cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b", Pod:"calico-apiserver-76c8ff9cd8-ml9g6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b7451e7cd6", MAC:"76:f5:1b:7f:3e:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 19:42:55.306223 containerd[1566]: 2025-12-12 19:42:55.284 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" Namespace="calico-apiserver" Pod="calico-apiserver-76c8ff9cd8-ml9g6" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-calico--apiserver--76c8ff9cd8--ml9g6-eth0"
Dec 12 19:42:55.300663 systemd[1]: Started cri-containerd-14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91.scope - libcontainer container 14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91.
Dec 12 19:42:55.379452 containerd[1566]: time="2025-12-12T19:42:55.379163442Z" level=info msg="connecting to shim bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b" address="unix:///run/containerd/s/d9973f83a512900e4448354d7fd5f91107a268f2561c0c0c957d25e69919d728" namespace=k8s.io protocol=ttrpc version=3
Dec 12 19:42:55.506181 containerd[1566]: time="2025-12-12T19:42:55.505960343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v968r,Uid:9bfee0fd-b637-401b-8c2c-b95c13a62022,Namespace:calico-system,Attempt:0,} returns sandbox id \"050feb0993302089d29fd58070d3c7a2cf99819983caad313c531f273333c43b\""
Dec 12 19:42:55.512355 containerd[1566]: time="2025-12-12T19:42:55.512225787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 19:42:55.535342 systemd-networkd[1501]: cali7bcb1bc44a4: Gained IPv6LL
Dec 12 19:42:55.541418 systemd[1]: Started cri-containerd-bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b.scope - libcontainer container bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b.
Dec 12 19:42:55.678728 containerd[1566]: time="2025-12-12T19:42:55.678655878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56475989c-wt7ld,Uid:95916008-465f-4755-98cd-82437c8d75be,Namespace:calico-system,Attempt:0,} returns sandbox id \"14d5344b14d1ee08c767dcd5e2241242af4f2e799029ab2506948c1c401b8b91\""
Dec 12 19:42:55.757763 containerd[1566]: time="2025-12-12T19:42:55.757603713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c8ff9cd8-ml9g6,Uid:fa81211e-8b3a-4af8-b6e2-d28a7d96f939,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bb01bda298d03e2ca0a6c94f095c0bcb621cccacd476832ab0f25e81b9df640b\""
Dec 12 19:42:55.831599 containerd[1566]: time="2025-12-12T19:42:55.831117060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:42:55.833637 containerd[1566]: time="2025-12-12T19:42:55.833496306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 19:42:55.833637 containerd[1566]: time="2025-12-12T19:42:55.833595333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 19:42:55.837429 kubelet[2884]: E1212 19:42:55.836418 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 19:42:55.837429 kubelet[2884]: E1212 19:42:55.836477 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 19:42:55.837429 kubelet[2884]: E1212 19:42:55.836803 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:42:55.837762 containerd[1566]: time="2025-12-12T19:42:55.837170395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 19:42:55.853818 kubelet[2884]: E1212 19:42:55.853767 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:42:56.166889 containerd[1566]: time="2025-12-12T19:42:56.166718619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:42:56.168763 containerd[1566]: time="2025-12-12T19:42:56.168693680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 19:42:56.169168 containerd[1566]: time="2025-12-12T19:42:56.168866602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 19:42:56.169625 kubelet[2884]: E1212 19:42:56.169560 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 19:42:56.170563 kubelet[2884]: E1212 19:42:56.169653 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 19:42:56.170911 containerd[1566]: time="2025-12-12T19:42:56.170854115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 19:42:56.172013 kubelet[2884]: E1212 19:42:56.170031 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2msb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56475989c-wt7ld_calico-system(95916008-465f-4755-98cd-82437c8d75be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:42:56.173925 kubelet[2884]: E1212 19:42:56.173867 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:42:56.367282 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL
Dec 12 19:42:56.484673 containerd[1566]: time="2025-12-12T19:42:56.484287484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:42:56.486139 containerd[1566]: time="2025-12-12T19:42:56.485990950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 19:42:56.486289 containerd[1566]: time="2025-12-12T19:42:56.486259315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:42:56.486692 kubelet[2884]: E1212 19:42:56.486608 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:42:56.486770 kubelet[2884]: E1212 19:42:56.486702 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:42:56.487178 kubelet[2884]: E1212 19:42:56.487070 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhfq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-ml9g6_calico-apiserver(fa81211e-8b3a-4af8-b6e2-d28a7d96f939): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:42:56.487747 containerd[1566]: time="2025-12-12T19:42:56.487705363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 19:42:56.488883 kubelet[2884]: E1212 19:42:56.488806 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:42:56.494260 systemd-networkd[1501]: cali472ad8d667d: Gained IPv6LL
Dec 12 19:42:56.622304 systemd-networkd[1501]: calid5d1e72d9a9: Gained IPv6LL
Dec 12 19:42:56.793584 containerd[1566]: time="2025-12-12T19:42:56.793346529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:42:56.795623 containerd[1566]: time="2025-12-12T19:42:56.795493079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 19:42:56.795623 containerd[1566]: time="2025-12-12T19:42:56.795576276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 19:42:56.796575 kubelet[2884]: E1212 19:42:56.795795 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 19:42:56.796575 kubelet[2884]: E1212 19:42:56.795863 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 19:42:56.796575 kubelet[2884]: E1212 19:42:56.796049 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:42:56.798400 kubelet[2884]: E1212 19:42:56.797789 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:56.858563 kubelet[2884]: E1212 19:42:56.858435 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:42:56.859343 kubelet[2884]: E1212 19:42:56.859121 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:42:56.859343 kubelet[2884]: E1212 19:42:56.859289 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:42:56.878315 systemd-networkd[1501]: cali1b7451e7cd6: Gained IPv6LL
Dec 12 19:42:57.239059 containerd[1566]: time="2025-12-12T19:42:57.238935661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t95fc,Uid:d836babe-8f7d-4346-8873-331eb853865c,Namespace:kube-system,Attempt:0,}"
Dec 12 19:42:57.422781 systemd-networkd[1501]: cali5ff31da95e7: Link UP
Dec 12 19:42:57.424649 systemd-networkd[1501]: cali5ff31da95e7: Gained carrier
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.311 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0 coredns-668d6bf9bc- kube-system d836babe-8f7d-4346-8873-331eb853865c 848 0 2025-12-12 19:42:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-tupcq.gb1.brightbox.com coredns-668d6bf9bc-t95fc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ff31da95e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.312 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.358 [INFO][4732] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" HandleID="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Workload="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.359 [INFO][4732] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" HandleID="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Workload="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-tupcq.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-t95fc", "timestamp":"2025-12-12 19:42:57.358920301 +0000 UTC"}, Hostname:"srv-tupcq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.359 [INFO][4732] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.359 [INFO][4732] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.359 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-tupcq.gb1.brightbox.com'
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.370 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.379 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.388 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.391 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.394 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.192/26 host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.395 [INFO][4732] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.192/26 handle="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.397 [INFO][4732] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.403 [INFO][4732] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.192/26 handle="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.411 [INFO][4732] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.200/26] block=192.168.91.192/26 handle="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.411 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.200/26] handle="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" host="srv-tupcq.gb1.brightbox.com"
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.411 [INFO][4732] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 19:42:57.453275 containerd[1566]: 2025-12-12 19:42:57.411 [INFO][4732] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.200/26] IPv6=[] ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" HandleID="k8s-pod-network.38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Workload="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" Dec 12 19:42:57.460045 containerd[1566]: 2025-12-12 19:42:57.415 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d836babe-8f7d-4346-8873-331eb853865c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-t95fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ff31da95e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:57.460045 containerd[1566]: 2025-12-12 19:42:57.416 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.200/32] ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" Dec 12 19:42:57.460045 containerd[1566]: 2025-12-12 19:42:57.416 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ff31da95e7 ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" Dec 12 19:42:57.460045 containerd[1566]: 2025-12-12 19:42:57.425 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" Dec 12 19:42:57.460045 containerd[1566]: 2025-12-12 19:42:57.426 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d836babe-8f7d-4346-8873-331eb853865c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-tupcq.gb1.brightbox.com", ContainerID:"38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6", Pod:"coredns-668d6bf9bc-t95fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ff31da95e7", MAC:"fa:94:73:5a:4d:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:42:57.460045 containerd[1566]: 2025-12-12 19:42:57.444 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-t95fc" WorkloadEndpoint="srv--tupcq.gb1.brightbox.com-k8s-coredns--668d6bf9bc--t95fc-eth0" Dec 12 19:42:57.522470 containerd[1566]: time="2025-12-12T19:42:57.522292011Z" level=info msg="connecting to shim 38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6" address="unix:///run/containerd/s/af441356d215b22f319511edb25d647c0fba4d02078295b5f124ef4d9d4fbce3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:42:57.598420 systemd[1]: Started cri-containerd-38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6.scope - libcontainer container 38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6. 
Dec 12 19:42:57.671552 containerd[1566]: time="2025-12-12T19:42:57.671480719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t95fc,Uid:d836babe-8f7d-4346-8873-331eb853865c,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6\""
Dec 12 19:42:57.677497 containerd[1566]: time="2025-12-12T19:42:57.677435608Z" level=info msg="CreateContainer within sandbox \"38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 19:42:57.701660 containerd[1566]: time="2025-12-12T19:42:57.700877306Z" level=info msg="Container 3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab: CDI devices from CRI Config.CDIDevices: []"
Dec 12 19:42:57.711581 containerd[1566]: time="2025-12-12T19:42:57.711518873Z" level=info msg="CreateContainer within sandbox \"38e610794fbb8773fe5d26748935fa71a57ab7960f080b2a26ed2f22a38979a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab\""
Dec 12 19:42:57.713345 containerd[1566]: time="2025-12-12T19:42:57.713289995Z" level=info msg="StartContainer for \"3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab\""
Dec 12 19:42:57.716037 containerd[1566]: time="2025-12-12T19:42:57.715898941Z" level=info msg="connecting to shim 3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab" address="unix:///run/containerd/s/af441356d215b22f319511edb25d647c0fba4d02078295b5f124ef4d9d4fbce3" protocol=ttrpc version=3
Dec 12 19:42:57.743351 systemd[1]: Started cri-containerd-3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab.scope - libcontainer container 3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab.
Dec 12 19:42:57.795767 containerd[1566]: time="2025-12-12T19:42:57.795615249Z" level=info msg="StartContainer for \"3e47730936002a03309fbe1e64b68254aa0b08c2f9978e63b135f073accc7aab\" returns successfully"
Dec 12 19:42:57.883790 kubelet[2884]: I1212 19:42:57.883708 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t95fc" podStartSLOduration=51.883682867 podStartE2EDuration="51.883682867s" podCreationTimestamp="2025-12-12 19:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:42:57.882603778 +0000 UTC m=+56.948731873" watchObservedRunningTime="2025-12-12 19:42:57.883682867 +0000 UTC m=+56.949810953"
Dec 12 19:42:58.803461 systemd[1]: Started sshd@13-10.244.20.246:22-157.245.76.79:50600.service - OpenSSH per-connection server daemon (157.245.76.79:50600).
Dec 12 19:42:58.981280 sshd[4833]: Invalid user webmaster from 157.245.76.79 port 50600
Dec 12 19:42:59.001593 sshd[4833]: Connection closed by invalid user webmaster 157.245.76.79 port 50600 [preauth]
Dec 12 19:42:59.006469 systemd[1]: sshd@13-10.244.20.246:22-157.245.76.79:50600.service: Deactivated successfully.
Dec 12 19:42:59.118403 systemd-networkd[1501]: cali5ff31da95e7: Gained IPv6LL Dec 12 19:43:06.240718 containerd[1566]: time="2025-12-12T19:43:06.240147225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:43:06.550733 containerd[1566]: time="2025-12-12T19:43:06.550246820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:06.553710 containerd[1566]: time="2025-12-12T19:43:06.553559775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:43:06.553710 containerd[1566]: time="2025-12-12T19:43:06.553650254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 19:43:06.553996 kubelet[2884]: E1212 19:43:06.553903 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:43:06.553996 kubelet[2884]: E1212 19:43:06.553974 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:43:06.556343 kubelet[2884]: E1212 19:43:06.554530 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr58l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-rsckc_calico-apiserver(cb91d52e-eadb-42e8-8836-c86b003fbe7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:06.556536 containerd[1566]: time="2025-12-12T19:43:06.555019402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 19:43:06.557073 kubelet[2884]: E1212 19:43:06.556686 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b" Dec 12 19:43:06.862658 containerd[1566]: time="2025-12-12T19:43:06.861904110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:06.863350 containerd[1566]: time="2025-12-12T19:43:06.863297753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 19:43:06.863442 containerd[1566]: time="2025-12-12T19:43:06.863425777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 19:43:06.863721 kubelet[2884]: E1212 19:43:06.863651 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:43:06.863831 kubelet[2884]: E1212 19:43:06.863747 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:43:06.864005 kubelet[2884]: E1212 19:43:06.863941 2884 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e818186c85fb4d2e91b6612b2aa24cc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:06.867916 containerd[1566]: time="2025-12-12T19:43:06.867741285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 19:43:07.183685 containerd[1566]: time="2025-12-12T19:43:07.183618888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:07.185102 containerd[1566]: time="2025-12-12T19:43:07.185012940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 19:43:07.185421 containerd[1566]: time="2025-12-12T19:43:07.185235871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 19:43:07.185738 kubelet[2884]: E1212 19:43:07.185667 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:43:07.186308 kubelet[2884]: E1212 19:43:07.185939 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:43:07.186467 kubelet[2884]: E1212 19:43:07.186189 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:07.188268 kubelet[2884]: E1212 19:43:07.187933 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea" Dec 12 19:43:07.243995 containerd[1566]: time="2025-12-12T19:43:07.243680490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 19:43:07.566369 containerd[1566]: 
time="2025-12-12T19:43:07.564511499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:07.566960 containerd[1566]: time="2025-12-12T19:43:07.566910243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 19:43:07.568422 containerd[1566]: time="2025-12-12T19:43:07.568261538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 19:43:07.571785 kubelet[2884]: E1212 19:43:07.571708 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:43:07.572592 kubelet[2884]: E1212 19:43:07.572551 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:43:07.572787 kubelet[2884]: E1212 19:43:07.572725 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:07.576425 containerd[1566]: time="2025-12-12T19:43:07.576377413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 19:43:07.901106 containerd[1566]: time="2025-12-12T19:43:07.900962848Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:07.902584 containerd[1566]: time="2025-12-12T19:43:07.902479251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 19:43:07.903150 containerd[1566]: time="2025-12-12T19:43:07.902704431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 19:43:07.903250 kubelet[2884]: E1212 19:43:07.902956 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:43:07.903250 kubelet[2884]: E1212 19:43:07.903010 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:43:07.904266 kubelet[2884]: E1212 19:43:07.904116 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:07.905832 kubelet[2884]: E1212 19:43:07.905757 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022" Dec 12 19:43:09.241244 containerd[1566]: time="2025-12-12T19:43:09.240482654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 19:43:09.545070 containerd[1566]: time="2025-12-12T19:43:09.544822172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:09.546579 containerd[1566]: time="2025-12-12T19:43:09.546529320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 19:43:09.546691 containerd[1566]: time="2025-12-12T19:43:09.546645767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 19:43:09.547043 kubelet[2884]: E1212 19:43:09.546947 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:43:09.547708 kubelet[2884]: E1212 19:43:09.547068 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:43:09.547708 kubelet[2884]: E1212 19:43:09.547334 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2msb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56475989c-wt7ld_calico-system(95916008-465f-4755-98cd-82437c8d75be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:09.549777 kubelet[2884]: E1212 19:43:09.549736 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be" Dec 12 19:43:10.239076 containerd[1566]: time="2025-12-12T19:43:10.238977925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 19:43:10.545955 containerd[1566]: time="2025-12-12T19:43:10.545682004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:10.553106 containerd[1566]: time="2025-12-12T19:43:10.553024679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 19:43:10.553210 containerd[1566]: time="2025-12-12T19:43:10.553181187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 19:43:10.553547 kubelet[2884]: E1212 19:43:10.553463 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:43:10.554480 kubelet[2884]: E1212 19:43:10.553557 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:43:10.554480 kubelet[2884]: E1212 
19:43:10.553753 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmdbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6dtpp_calico-system(9e1f5ae0-5750-4ed0-9230-cd71bbf186d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:10.554988 kubelet[2884]: E1212 19:43:10.554950 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" 
podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8" Dec 12 19:43:11.240034 containerd[1566]: time="2025-12-12T19:43:11.239756675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:43:11.548548 containerd[1566]: time="2025-12-12T19:43:11.548324982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:43:11.555292 containerd[1566]: time="2025-12-12T19:43:11.555178142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:43:11.555292 containerd[1566]: time="2025-12-12T19:43:11.555247051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 19:43:11.556530 kubelet[2884]: E1212 19:43:11.555564 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:43:11.556530 kubelet[2884]: E1212 19:43:11.555636 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:43:11.556530 kubelet[2884]: E1212 19:43:11.555827 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhfq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-ml9g6_calico-apiserver(fa81211e-8b3a-4af8-b6e2-d28a7d96f939): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:43:11.557626 kubelet[2884]: E1212 19:43:11.557333 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939" Dec 12 19:43:18.240257 kubelet[2884]: E1212 19:43:18.239399 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b" Dec 12 19:43:18.240925 kubelet[2884]: E1212 19:43:18.240394 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea" Dec 12 19:43:20.239306 kubelet[2884]: E1212 19:43:20.239227 2884 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be" Dec 12 19:43:21.247339 kubelet[2884]: E1212 19:43:21.246948 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022" Dec 12 19:43:22.239259 kubelet[2884]: E1212 19:43:22.239147 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8" Dec 12 19:43:23.155656 systemd[1]: Started sshd@14-10.244.20.246:22-157.245.76.79:37508.service - OpenSSH per-connection server daemon (157.245.76.79:37508). Dec 12 19:43:23.305974 sshd[4899]: Invalid user webmaster from 157.245.76.79 port 37508 Dec 12 19:43:23.321503 sshd[4899]: Connection closed by invalid user webmaster 157.245.76.79 port 37508 [preauth] Dec 12 19:43:23.325247 systemd[1]: sshd@14-10.244.20.246:22-157.245.76.79:37508.service: Deactivated successfully. 
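[The 404s above are answered by the registry itself: the tag does not resolve under ghcr.io/flatcar/calico. A minimal sketch, not part of the log, of the same lookup the puller performs against the standard OCI distribution API; the anonymous-token grant for ghcr.io is an assumption here, as is the choice of Accept header:]

```python
# Sketch: ask ghcr.io's OCI distribution API whether the tag from the log
# exists. Assumes ghcr.io grants anonymous pull tokens for public repos.
import requests

REPO = "flatcar/calico/apiserver"   # image name taken from the log above
TAG = "v3.30.4"                     # tag containerd failed to resolve

# 1. Anonymous bearer token scoped to pulling this repository.
tok = requests.get(
    "https://ghcr.io/token",
    params={"scope": f"repository:{REPO}:pull"},
    timeout=10,
).json()["token"]

# 2. Manifest lookup: 200 means the tag exists; 404 matches the
#    "fetch failed after status: 404 Not Found" lines in this log.
r = requests.get(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {tok}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
    timeout=10,
)
print(TAG, "->", r.status_code)
```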
Dec 12 19:43:26.240352 kubelet[2884]: E1212 19:43:26.240197 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:43:29.243496 containerd[1566]: time="2025-12-12T19:43:29.243348875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 19:43:29.570587 containerd[1566]: time="2025-12-12T19:43:29.570184248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:29.572111 containerd[1566]: time="2025-12-12T19:43:29.571684666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 19:43:29.572111 containerd[1566]: time="2025-12-12T19:43:29.571828866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:43:29.573253 kubelet[2884]: E1212 19:43:29.572400 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:43:29.573253 kubelet[2884]: E1212 19:43:29.572486 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:43:29.573253 kubelet[2884]: E1212 19:43:29.572674 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr58l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-rsckc_calico-apiserver(cb91d52e-eadb-42e8-8836-c86b003fbe7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:29.576214 kubelet[2884]: E1212 19:43:29.573820 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:43:32.240794 containerd[1566]: time="2025-12-12T19:43:32.240344304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 19:43:32.548455 containerd[1566]: time="2025-12-12T19:43:32.547660893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:32.550022 containerd[1566]: time="2025-12-12T19:43:32.549896166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 19:43:32.550250 containerd[1566]: time="2025-12-12T19:43:32.550140155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 19:43:32.551152 kubelet[2884]: E1212 19:43:32.551064 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 19:43:32.551615 kubelet[2884]: E1212 19:43:32.551290 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 19:43:32.551903 kubelet[2884]: E1212 19:43:32.551553 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e818186c85fb4d2e91b6612b2aa24cc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:32.555761 containerd[1566]: time="2025-12-12T19:43:32.555593317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 19:43:32.865974 containerd[1566]: time="2025-12-12T19:43:32.865734563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:32.867161 containerd[1566]: time="2025-12-12T19:43:32.867065979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 19:43:32.867161 containerd[1566]: time="2025-12-12T19:43:32.867120979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 19:43:32.867669 kubelet[2884]: E1212 19:43:32.867450 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 19:43:32.867669 kubelet[2884]: E1212 19:43:32.867530 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 19:43:32.868251 kubelet[2884]: E1212 19:43:32.867704 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:32.869378 kubelet[2884]: E1212 19:43:32.869329 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:43:33.239503 containerd[1566]: time="2025-12-12T19:43:33.239339725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 19:43:33.549356 containerd[1566]: time="2025-12-12T19:43:33.548790573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:33.550403 containerd[1566]: time="2025-12-12T19:43:33.550147717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 19:43:33.550403 containerd[1566]: time="2025-12-12T19:43:33.550222971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:43:33.551170 kubelet[2884]: E1212 19:43:33.550785 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 19:43:33.551170 kubelet[2884]: E1212 19:43:33.550872 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 19:43:33.552308 kubelet[2884]: E1212 19:43:33.551646 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmdbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6dtpp_calico-system(9e1f5ae0-5750-4ed0-9230-cd71bbf186d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:33.553815 kubelet[2884]: E1212 19:43:33.553638 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:43:35.242369 containerd[1566]: time="2025-12-12T19:43:35.242203184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 19:43:35.563076 containerd[1566]: time="2025-12-12T19:43:35.562585928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:35.565592 containerd[1566]: time="2025-12-12T19:43:35.565443102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 19:43:35.565592 containerd[1566]: time="2025-12-12T19:43:35.565513973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 19:43:35.566100 kubelet[2884]: E1212 19:43:35.566007 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 19:43:35.567196 kubelet[2884]: E1212 19:43:35.566606 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 19:43:35.567196 kubelet[2884]: E1212 19:43:35.566901 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2msb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56475989c-wt7ld_calico-system(95916008-465f-4755-98cd-82437c8d75be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:35.568326 kubelet[2884]: E1212 19:43:35.568213 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:43:35.568494 containerd[1566]: time="2025-12-12T19:43:35.568264034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 19:43:35.929372 containerd[1566]: time="2025-12-12T19:43:35.929194431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:35.931219 containerd[1566]: time="2025-12-12T19:43:35.930939448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 19:43:35.931219 containerd[1566]: time="2025-12-12T19:43:35.930960885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 19:43:35.932210 kubelet[2884]: E1212 19:43:35.932046 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 19:43:35.932210 kubelet[2884]: E1212 19:43:35.932162 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 19:43:35.933336 kubelet[2884]: E1212 19:43:35.933149 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:35.936599 containerd[1566]: time="2025-12-12T19:43:35.936564316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 19:43:36.239292 containerd[1566]: time="2025-12-12T19:43:36.239049858Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:36.241533 containerd[1566]: time="2025-12-12T19:43:36.241461928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 19:43:36.241922 containerd[1566]: time="2025-12-12T19:43:36.241585958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 19:43:36.242209 kubelet[2884]: E1212 19:43:36.242137 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 19:43:36.242209 kubelet[2884]: E1212 19:43:36.242203 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 19:43:36.243107 kubelet[2884]: E1212 19:43:36.242611 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:36.244428 kubelet[2884]: E1212 19:43:36.243835 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:43:39.215969 systemd[1]: Started sshd@15-10.244.20.246:22-147.75.109.163:34428.service - OpenSSH per-connection server daemon (147.75.109.163:34428).
Dec 12 19:43:40.178951 sshd[4922]: Accepted publickey for core from 147.75.109.163 port 34428 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:43:40.183026 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:43:40.198954 systemd-logind[1534]: New session 12 of user core.
Dec 12 19:43:40.203628 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 19:43:41.244531 containerd[1566]: time="2025-12-12T19:43:41.244370994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 19:43:41.503104 sshd[4925]: Connection closed by 147.75.109.163 port 34428
Dec 12 19:43:41.502564 sshd-session[4922]: pam_unix(sshd:session): session closed for user core
Dec 12 19:43:41.512266 systemd[1]: sshd@15-10.244.20.246:22-147.75.109.163:34428.service: Deactivated successfully.
Dec 12 19:43:41.515971 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 19:43:41.518189 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit.
Dec 12 19:43:41.522610 systemd-logind[1534]: Removed session 12.
Dec 12 19:43:41.561349 containerd[1566]: time="2025-12-12T19:43:41.561103168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:43:41.563425 containerd[1566]: time="2025-12-12T19:43:41.563258914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 19:43:41.563425 containerd[1566]: time="2025-12-12T19:43:41.563349587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:43:41.564114 kubelet[2884]: E1212 19:43:41.563797 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:43:41.564114 kubelet[2884]: E1212 19:43:41.563884 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:43:41.565478 kubelet[2884]: E1212 19:43:41.564533 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhfq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-ml9g6_calico-apiserver(fa81211e-8b3a-4af8-b6e2-d28a7d96f939): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:43:41.566889 kubelet[2884]: E1212 19:43:41.566833 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:43:42.242127 kubelet[2884]: E1212 19:43:42.241583 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:43:45.241016 kubelet[2884]: E1212 19:43:45.240778 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:43:45.246107 kubelet[2884]: E1212 19:43:45.245287 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:43:46.666274 systemd[1]: Started sshd@16-10.244.20.246:22-147.75.109.163:56462.service - OpenSSH per-connection server daemon (147.75.109.163:56462).
Dec 12 19:43:47.241458 kubelet[2884]: E1212 19:43:47.240630 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:43:47.594243 sshd[4941]: Accepted publickey for core from 147.75.109.163 port 56462 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:43:47.596214 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:43:47.606185 systemd-logind[1534]: New session 13 of user core.
Dec 12 19:43:47.615391 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 19:43:47.649542 systemd[1]: Started sshd@17-10.244.20.246:22-157.245.76.79:52906.service - OpenSSH per-connection server daemon (157.245.76.79:52906).
Dec 12 19:43:47.777516 sshd[4946]: Invalid user webmaster from 157.245.76.79 port 52906
Dec 12 19:43:47.796172 sshd[4946]: Connection closed by invalid user webmaster 157.245.76.79 port 52906 [preauth]
Dec 12 19:43:47.800811 systemd[1]: sshd@17-10.244.20.246:22-157.245.76.79:52906.service: Deactivated successfully.
Dec 12 19:43:48.365112 sshd[4944]: Connection closed by 147.75.109.163 port 56462
Dec 12 19:43:48.367660 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Dec 12 19:43:48.375254 systemd[1]: sshd@16-10.244.20.246:22-147.75.109.163:56462.service: Deactivated successfully.
Dec 12 19:43:48.378445 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 19:43:48.380794 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit.
Dec 12 19:43:48.384488 systemd-logind[1534]: Removed session 13.
Dec 12 19:43:49.242132 kubelet[2884]: E1212 19:43:49.241451 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:43:53.243013 kubelet[2884]: E1212 19:43:53.242942 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:43:53.524682 systemd[1]: Started sshd@18-10.244.20.246:22-147.75.109.163:60330.service - OpenSSH per-connection server daemon (147.75.109.163:60330).
Dec 12 19:43:54.474862 sshd[4987]: Accepted publickey for core from 147.75.109.163 port 60330 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:43:54.476983 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:43:54.486869 systemd-logind[1534]: New session 14 of user core.
Dec 12 19:43:54.492301 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 19:43:55.297312 sshd[4990]: Connection closed by 147.75.109.163 port 60330
Dec 12 19:43:55.297887 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
Dec 12 19:43:55.306332 systemd[1]: sshd@18-10.244.20.246:22-147.75.109.163:60330.service: Deactivated successfully.
Dec 12 19:43:55.310601 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 19:43:55.312688 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit.
Dec 12 19:43:55.314554 systemd-logind[1534]: Removed session 14.
Dec 12 19:43:55.459865 systemd[1]: Started sshd@19-10.244.20.246:22-147.75.109.163:60336.service - OpenSSH per-connection server daemon (147.75.109.163:60336).
Dec 12 19:43:56.238003 kubelet[2884]: E1212 19:43:56.237929 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:43:56.388072 sshd[5003]: Accepted publickey for core from 147.75.109.163 port 60336 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:43:56.390148 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:43:56.400464 systemd-logind[1534]: New session 15 of user core.
Dec 12 19:43:56.407309 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 19:43:57.245171 kubelet[2884]: E1212 19:43:57.242728 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:43:57.302180 sshd[5006]: Connection closed by 147.75.109.163 port 60336
Dec 12 19:43:57.303361 sshd-session[5003]: pam_unix(sshd:session): session closed for user core
Dec 12 19:43:57.311736 systemd[1]: sshd@19-10.244.20.246:22-147.75.109.163:60336.service: Deactivated successfully.
Dec 12 19:43:57.314927 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 19:43:57.319771 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
Dec 12 19:43:57.323177 systemd-logind[1534]: Removed session 15.
Dec 12 19:43:57.459418 systemd[1]: Started sshd@20-10.244.20.246:22-147.75.109.163:60350.service - OpenSSH per-connection server daemon (147.75.109.163:60350).
Dec 12 19:43:58.239847 kubelet[2884]: E1212 19:43:58.239775 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:43:58.396536 sshd[5016]: Accepted publickey for core from 147.75.109.163 port 60350 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:43:58.398538 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:43:58.407392 systemd-logind[1534]: New session 16 of user core.
Dec 12 19:43:58.414403 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 19:43:59.153119 sshd[5020]: Connection closed by 147.75.109.163 port 60350
Dec 12 19:43:59.154033 sshd-session[5016]: pam_unix(sshd:session): session closed for user core
Dec 12 19:43:59.160242 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
Dec 12 19:43:59.161671 systemd[1]: sshd@20-10.244.20.246:22-147.75.109.163:60350.service: Deactivated successfully.
Dec 12 19:43:59.167412 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 19:43:59.170762 systemd-logind[1534]: Removed session 16.
Dec 12 19:43:59.240882 kubelet[2884]: E1212 19:43:59.240731 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:44:04.244386 kubelet[2884]: E1212 19:44:04.244202 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:44:04.312732 systemd[1]: Started sshd@21-10.244.20.246:22-147.75.109.163:52066.service - OpenSSH per-connection server daemon (147.75.109.163:52066).
Dec 12 19:44:05.250784 sshd[5039]: Accepted publickey for core from 147.75.109.163 port 52066 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:05.253722 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:05.264955 systemd-logind[1534]: New session 17 of user core.
Dec 12 19:44:05.269295 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 19:44:06.052503 sshd[5042]: Connection closed by 147.75.109.163 port 52066
Dec 12 19:44:06.052359 sshd-session[5039]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:06.059567 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit.
Dec 12 19:44:06.060009 systemd[1]: sshd@21-10.244.20.246:22-147.75.109.163:52066.service: Deactivated successfully.
Dec 12 19:44:06.064035 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 19:44:06.068653 systemd-logind[1534]: Removed session 17.
Dec 12 19:44:06.241740 kubelet[2884]: E1212 19:44:06.241683 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:44:08.239991 kubelet[2884]: E1212 19:44:08.239751 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:44:08.239991 kubelet[2884]: E1212 19:44:08.239894 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:44:11.212208 systemd[1]: Started sshd@22-10.244.20.246:22-147.75.109.163:52070.service - OpenSSH per-connection server daemon (147.75.109.163:52070).
Dec 12 19:44:12.135602 sshd[5058]: Accepted publickey for core from 147.75.109.163 port 52070 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:12.137804 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:12.149320 systemd-logind[1534]: New session 18 of user core.
Dec 12 19:44:12.155530 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 19:44:12.243123 kubelet[2884]: E1212 19:44:12.241354 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:44:12.244418 kubelet[2884]: E1212 19:44:12.244308 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:44:12.442594 systemd[1]: Started sshd@23-10.244.20.246:22-157.245.76.79:55576.service - OpenSSH per-connection server daemon (157.245.76.79:55576).
Dec 12 19:44:12.614260 sshd[5063]: Invalid user webmaster from 157.245.76.79 port 55576
Dec 12 19:44:12.631182 sshd[5063]: Connection closed by invalid user webmaster 157.245.76.79 port 55576 [preauth]
Dec 12 19:44:12.637020 systemd[1]: sshd@23-10.244.20.246:22-157.245.76.79:55576.service: Deactivated successfully.
Dec 12 19:44:12.960116 sshd[5061]: Connection closed by 147.75.109.163 port 52070
Dec 12 19:44:12.960400 sshd-session[5058]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:12.965971 systemd[1]: sshd@22-10.244.20.246:22-147.75.109.163:52070.service: Deactivated successfully.
Dec 12 19:44:12.969587 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 19:44:12.974168 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit.
Dec 12 19:44:12.976866 systemd-logind[1534]: Removed session 18.
Dec 12 19:44:15.241872 kubelet[2884]: E1212 19:44:15.241778 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:44:17.239440 kubelet[2884]: E1212 19:44:17.239327 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:44:18.119601 systemd[1]: Started sshd@24-10.244.20.246:22-147.75.109.163:40696.service - OpenSSH per-connection server daemon (147.75.109.163:40696).
Dec 12 19:44:19.056635 sshd[5087]: Accepted publickey for core from 147.75.109.163 port 40696 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:19.059457 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:19.074384 systemd-logind[1534]: New session 19 of user core.
Dec 12 19:44:19.078526 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 19:44:19.810992 sshd[5090]: Connection closed by 147.75.109.163 port 40696
Dec 12 19:44:19.811403 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:19.825211 systemd[1]: sshd@24-10.244.20.246:22-147.75.109.163:40696.service: Deactivated successfully.
Dec 12 19:44:19.832465 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 19:44:19.835173 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit.
Dec 12 19:44:19.838876 systemd-logind[1534]: Removed session 19.
Dec 12 19:44:19.976213 systemd[1]: Started sshd@25-10.244.20.246:22-147.75.109.163:40712.service - OpenSSH per-connection server daemon (147.75.109.163:40712).
Dec 12 19:44:20.902117 sshd[5102]: Accepted publickey for core from 147.75.109.163 port 40712 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:20.903262 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:20.912350 systemd-logind[1534]: New session 20 of user core.
Dec 12 19:44:20.919802 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 19:44:22.068140 sshd[5105]: Connection closed by 147.75.109.163 port 40712
Dec 12 19:44:22.069621 sshd-session[5102]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:22.081443 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Dec 12 19:44:22.084563 systemd[1]: sshd@25-10.244.20.246:22-147.75.109.163:40712.service: Deactivated successfully.
Dec 12 19:44:22.088984 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 19:44:22.099697 systemd-logind[1534]: Removed session 20.
Dec 12 19:44:22.232229 systemd[1]: Started sshd@26-10.244.20.246:22-147.75.109.163:40716.service - OpenSSH per-connection server daemon (147.75.109.163:40716).
Dec 12 19:44:22.242167 containerd[1566]: time="2025-12-12T19:44:22.241887884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 19:44:22.555337 containerd[1566]: time="2025-12-12T19:44:22.554766580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:22.557116 containerd[1566]: time="2025-12-12T19:44:22.556948126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 19:44:22.557116 containerd[1566]: time="2025-12-12T19:44:22.557030643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:44:22.557568 kubelet[2884]: E1212 19:44:22.557471 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 19:44:22.558345 kubelet[2884]: E1212 19:44:22.557596 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 19:44:22.558345 kubelet[2884]: E1212 19:44:22.558075 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmdbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6dtpp_calico-system(9e1f5ae0-5750-4ed0-9230-cd71bbf186d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:22.560199 kubelet[2884]: E1212 19:44:22.559388 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:44:23.198041 sshd[5139]: Accepted publickey for core from 147.75.109.163 port 40716 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:23.200297 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:23.211185 systemd-logind[1534]: New session 21 of user core.
Dec 12 19:44:23.218311 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 19:44:23.244023 containerd[1566]: time="2025-12-12T19:44:23.243960807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 19:44:23.567892 containerd[1566]: time="2025-12-12T19:44:23.567316550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:23.569591 containerd[1566]: time="2025-12-12T19:44:23.569541980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 19:44:23.571047 containerd[1566]: time="2025-12-12T19:44:23.569706074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:44:23.571144 kubelet[2884]: E1212 19:44:23.569987 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:44:23.571144 kubelet[2884]: E1212 19:44:23.570063 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:44:23.571144 kubelet[2884]: E1212 19:44:23.570413 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr58l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-rsckc_calico-apiserver(cb91d52e-eadb-42e8-8836-c86b003fbe7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:23.572448 containerd[1566]: time="2025-12-12T19:44:23.572158779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 19:44:23.572630 kubelet[2884]: E1212 19:44:23.572590 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:44:23.901959 containerd[1566]: time="2025-12-12T19:44:23.901368544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:23.907351 containerd[1566]: time="2025-12-12T19:44:23.907222908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 19:44:23.908556 containerd[1566]: time="2025-12-12T19:44:23.907553471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 19:44:23.908912 kubelet[2884]: E1212 19:44:23.907710 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 19:44:23.908912 kubelet[2884]: E1212 19:44:23.907770 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 19:44:23.908912 kubelet[2884]: E1212 19:44:23.907904 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e818186c85fb4d2e91b6612b2aa24cc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:23.914962 containerd[1566]: time="2025-12-12T19:44:23.914915207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 19:44:24.232953 containerd[1566]: time="2025-12-12T19:44:24.232557329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:24.235233 containerd[1566]: time="2025-12-12T19:44:24.235180710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 19:44:24.235434 containerd[1566]: time="2025-12-12T19:44:24.235341591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 19:44:24.235886 kubelet[2884]: E1212 19:44:24.235742 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 19:44:24.236206 kubelet[2884]: E1212 19:44:24.236146 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 19:44:24.238026 kubelet[2884]: E1212 19:44:24.237934 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-995586667-ttmxg_calico-system(28a1d6ff-e4a7-417e-8d50-93082a2b90ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:24.239529 kubelet[2884]: E1212 19:44:24.239380 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:44:24.879971 sshd[5142]: Connection closed by 147.75.109.163 port 40716
Dec 12 19:44:24.882331 sshd-session[5139]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:24.894797 systemd[1]: sshd@26-10.244.20.246:22-147.75.109.163:40716.service: Deactivated successfully.
Dec 12 19:44:24.900371 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 19:44:24.902363 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit.
Dec 12 19:44:24.905363 systemd-logind[1534]: Removed session 21.
Dec 12 19:44:25.068633 systemd[1]: Started sshd@27-10.244.20.246:22-147.75.109.163:38936.service - OpenSSH per-connection server daemon (147.75.109.163:38936).
Dec 12 19:44:26.076822 sshd[5159]: Accepted publickey for core from 147.75.109.163 port 38936 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:26.078722 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:26.091166 systemd-logind[1534]: New session 22 of user core.
Dec 12 19:44:26.103691 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 19:44:26.239518 containerd[1566]: time="2025-12-12T19:44:26.239459021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 19:44:26.550597 containerd[1566]: time="2025-12-12T19:44:26.550343498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:26.554640 containerd[1566]: time="2025-12-12T19:44:26.554375582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 19:44:26.555034 kubelet[2884]: E1212 19:44:26.554977 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 19:44:26.555968 kubelet[2884]: E1212 19:44:26.555052 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 19:44:26.555968 kubelet[2884]: E1212 19:44:26.555252 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:26.561650 containerd[1566]: time="2025-12-12T19:44:26.554527971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 19:44:26.562027 containerd[1566]: time="2025-12-12T19:44:26.558310887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 19:44:26.894662 containerd[1566]: time="2025-12-12T19:44:26.894082625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:26.898197 containerd[1566]: time="2025-12-12T19:44:26.896965714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 19:44:26.898477 containerd[1566]: time="2025-12-12T19:44:26.898333110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 19:44:26.898907 kubelet[2884]: E1212 19:44:26.898850 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 19:44:26.899017 kubelet[2884]: E1212 19:44:26.898926 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 19:44:26.899501 kubelet[2884]: E1212 19:44:26.899120 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbfhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v968r_calico-system(9bfee0fd-b637-401b-8c2c-b95c13a62022): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:26.901791 kubelet[2884]: E1212 19:44:26.900824 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:44:27.172117 sshd[5162]: Connection closed by 147.75.109.163 port 38936
Dec 12 19:44:27.173604 sshd-session[5159]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:27.180967 systemd[1]: sshd@27-10.244.20.246:22-147.75.109.163:38936.service: Deactivated successfully.
Dec 12 19:44:27.189532 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 19:44:27.194995 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit.
Dec 12 19:44:27.198174 systemd-logind[1534]: Removed session 22.
Dec 12 19:44:27.240611 containerd[1566]: time="2025-12-12T19:44:27.240544503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 19:44:27.318125 systemd[1]: Started sshd@28-10.244.20.246:22-147.75.109.163:38948.service - OpenSSH per-connection server daemon (147.75.109.163:38948).
Dec 12 19:44:27.547576 containerd[1566]: time="2025-12-12T19:44:27.547365699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:27.548990 containerd[1566]: time="2025-12-12T19:44:27.548884442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 19:44:27.549144 containerd[1566]: time="2025-12-12T19:44:27.549003323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 19:44:27.549338 kubelet[2884]: E1212 19:44:27.549235 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 19:44:27.549338 kubelet[2884]: E1212 19:44:27.549315 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 19:44:27.549765 kubelet[2884]: E1212 19:44:27.549517 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2msb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56475989c-wt7ld_calico-system(95916008-465f-4755-98cd-82437c8d75be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:27.550839 kubelet[2884]: E1212 19:44:27.550705 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:44:28.239590 containerd[1566]: time="2025-12-12T19:44:28.239480657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 19:44:28.251195 sshd[5179]: Accepted publickey for core from 147.75.109.163 port 38948 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:28.254872 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:28.264922 systemd-logind[1534]: New session 23 of user core.
Dec 12 19:44:28.272352 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 19:44:28.548555 containerd[1566]: time="2025-12-12T19:44:28.548352089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 19:44:28.550595 containerd[1566]: time="2025-12-12T19:44:28.550440349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 19:44:28.550595 containerd[1566]: time="2025-12-12T19:44:28.550456205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 19:44:28.550765 kubelet[2884]: E1212 19:44:28.550715 2884 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:44:28.551147 kubelet[2884]: E1212 19:44:28.550785 2884 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 19:44:28.551147 kubelet[2884]: E1212 19:44:28.550972 2884 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhfq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76c8ff9cd8-ml9g6_calico-apiserver(fa81211e-8b3a-4af8-b6e2-d28a7d96f939): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 19:44:28.552432 kubelet[2884]: E1212 19:44:28.552236 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:44:29.086115 sshd[5195]: Connection closed by 147.75.109.163 port 38948
Dec 12 19:44:29.085806 sshd-session[5179]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:29.094809 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit.
Dec 12 19:44:29.098858 systemd[1]: sshd@28-10.244.20.246:22-147.75.109.163:38948.service: Deactivated successfully.
Dec 12 19:44:29.102900 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 19:44:29.106239 systemd-logind[1534]: Removed session 23.
Dec 12 19:44:34.247665 systemd[1]: Started sshd@29-10.244.20.246:22-147.75.109.163:48448.service - OpenSSH per-connection server daemon (147.75.109.163:48448).
Dec 12 19:44:35.209147 sshd[5209]: Accepted publickey for core from 147.75.109.163 port 48448 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:35.212566 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:35.226534 systemd-logind[1534]: New session 24 of user core.
Dec 12 19:44:35.231312 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 12 19:44:35.241606 kubelet[2884]: E1212 19:44:35.241549 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:44:35.982287 sshd[5212]: Connection closed by 147.75.109.163 port 48448
Dec 12 19:44:35.982920 sshd-session[5209]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:35.991714 systemd[1]: sshd@29-10.244.20.246:22-147.75.109.163:48448.service: Deactivated successfully.
Dec 12 19:44:35.996745 systemd[1]: session-24.scope: Deactivated successfully.
Dec 12 19:44:35.998815 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit.
Dec 12 19:44:36.002413 systemd-logind[1534]: Removed session 24.
Dec 12 19:44:36.240364 kubelet[2884]: E1212 19:44:36.239772 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:44:37.041393 systemd[1]: Started sshd@30-10.244.20.246:22-157.245.76.79:50468.service - OpenSSH per-connection server daemon (157.245.76.79:50468).
Dec 12 19:44:37.245194 kubelet[2884]: E1212 19:44:37.245122 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:44:37.381648 sshd[5224]: Invalid user nagios from 157.245.76.79 port 50468
Dec 12 19:44:37.465760 sshd[5224]: Connection closed by invalid user nagios 157.245.76.79 port 50468 [preauth]
Dec 12 19:44:37.470026 systemd[1]: sshd@30-10.244.20.246:22-157.245.76.79:50468.service: Deactivated successfully.
Dec 12 19:44:39.251598 kubelet[2884]: E1212 19:44:39.251424 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"
Dec 12 19:44:39.257399 kubelet[2884]: E1212 19:44:39.257284 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:44:41.138422 systemd[1]: Started sshd@31-10.244.20.246:22-147.75.109.163:48452.service - OpenSSH per-connection server daemon (147.75.109.163:48452).
Dec 12 19:44:41.245788 kubelet[2884]: E1212 19:44:41.245696 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:44:42.067002 sshd[5233]: Accepted publickey for core from 147.75.109.163 port 48452 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:42.071712 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:42.083198 systemd-logind[1534]: New session 25 of user core.
Dec 12 19:44:42.088406 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 12 19:44:42.852129 sshd[5236]: Connection closed by 147.75.109.163 port 48452
Dec 12 19:44:42.853191 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:42.860489 systemd[1]: sshd@31-10.244.20.246:22-147.75.109.163:48452.service: Deactivated successfully.
Dec 12 19:44:42.865613 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 19:44:42.867424 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Dec 12 19:44:42.870174 systemd-logind[1534]: Removed session 25.
Dec 12 19:44:48.016444 systemd[1]: Started sshd@32-10.244.20.246:22-147.75.109.163:51712.service - OpenSSH per-connection server daemon (147.75.109.163:51712).
Dec 12 19:44:48.958306 sshd[5248]: Accepted publickey for core from 147.75.109.163 port 51712 ssh2: RSA SHA256:dtGVIBmi5GBDDRXWMHOUdZ7AMlcejJgaHwElsZPMiqo
Dec 12 19:44:48.960725 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:44:48.971624 systemd-logind[1534]: New session 26 of user core.
Dec 12 19:44:48.977402 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 12 19:44:49.717390 sshd[5251]: Connection closed by 147.75.109.163 port 51712
Dec 12 19:44:49.718584 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Dec 12 19:44:49.727762 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Dec 12 19:44:49.729746 systemd[1]: sshd@32-10.244.20.246:22-147.75.109.163:51712.service: Deactivated successfully.
Dec 12 19:44:49.735320 systemd[1]: session-26.scope: Deactivated successfully.
Dec 12 19:44:49.740240 systemd-logind[1534]: Removed session 26.
Dec 12 19:44:50.239716 kubelet[2884]: E1212 19:44:50.239616 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-rsckc" podUID="cb91d52e-eadb-42e8-8836-c86b003fbe7b"
Dec 12 19:44:50.244842 kubelet[2884]: E1212 19:44:50.244704 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-995586667-ttmxg" podUID="28a1d6ff-e4a7-417e-8d50-93082a2b90ea"
Dec 12 19:44:50.245940 kubelet[2884]: E1212 19:44:50.245750 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v968r" podUID="9bfee0fd-b637-401b-8c2c-b95c13a62022"
Dec 12 19:44:52.240332 kubelet[2884]: E1212 19:44:52.240258 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6dtpp" podUID="9e1f5ae0-5750-4ed0-9230-cd71bbf186d8"
Dec 12 19:44:53.241619 kubelet[2884]: E1212 19:44:53.241499 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56475989c-wt7ld" podUID="95916008-465f-4755-98cd-82437c8d75be"
Dec 12 19:44:53.244569 kubelet[2884]: E1212 19:44:53.242346 2884 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76c8ff9cd8-ml9g6" podUID="fa81211e-8b3a-4af8-b6e2-d28a7d96f939"