Nov 24 01:44:05.930108 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025 Nov 24 01:44:05.930144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 01:44:05.930158 kernel: BIOS-provided physical RAM map: Nov 24 01:44:05.930168 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 24 01:44:05.930184 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 24 01:44:05.930194 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 24 01:44:05.930206 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Nov 24 01:44:05.930223 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Nov 24 01:44:05.930234 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 24 01:44:05.930244 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 24 01:44:05.930255 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 24 01:44:05.930265 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 24 01:44:05.930276 kernel: NX (Execute Disable) protection: active Nov 24 01:44:05.930291 kernel: APIC: Static calls initialized Nov 24 01:44:05.930304 kernel: SMBIOS 2.8 present. Nov 24 01:44:05.930315 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Nov 24 01:44:05.930332 kernel: DMI: Memory slots populated: 1/1 Nov 24 01:44:05.930344 kernel: Hypervisor detected: KVM Nov 24 01:44:05.930355 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 24 01:44:05.930372 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 24 01:44:05.930383 kernel: kvm-clock: using sched offset of 6700688122 cycles Nov 24 01:44:05.930395 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 24 01:44:05.930407 kernel: tsc: Detected 2799.998 MHz processor Nov 24 01:44:05.930419 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 01:44:05.930430 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 01:44:05.930442 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 24 01:44:05.930453 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 24 01:44:05.930465 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 01:44:05.930481 kernel: Using GB pages for direct mapping Nov 24 01:44:05.930492 kernel: ACPI: Early table checksum verification disabled Nov 24 01:44:05.930503 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Nov 24 01:44:05.930515 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930526 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930537 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930549 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Nov 24 01:44:05.930560 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930571 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930587 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930599 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 01:44:05.930629 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Nov 24 01:44:05.930651 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Nov 24 01:44:05.930663 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Nov 24 01:44:05.930675 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Nov 24 01:44:05.930691 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Nov 24 01:44:05.930703 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Nov 24 01:44:05.930715 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Nov 24 01:44:05.930726 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 24 01:44:05.930738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 24 01:44:05.930750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Nov 24 01:44:05.930762 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Nov 24 01:44:05.930774 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Nov 24 01:44:05.930791 kernel: Zone ranges: Nov 24 01:44:05.930803 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 01:44:05.930814 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Nov 24 01:44:05.930826 kernel: Normal empty Nov 24 01:44:05.930838 kernel: Device empty Nov 24 01:44:05.930850 kernel: Movable zone start for each node Nov 24 01:44:05.930861 kernel: Early memory node ranges Nov 24 01:44:05.930873 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 24 01:44:05.930885 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Nov 24 01:44:05.930896 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Nov 24 01:44:05.930913 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 01:44:05.930924 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 24 01:44:05.931002 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Nov 24 01:44:05.931017 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 24 01:44:05.931032 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 24 01:44:05.931045 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 24 01:44:05.931057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 24 01:44:05.931069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 24 01:44:05.931081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 01:44:05.931099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 24 01:44:05.931111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 24 01:44:05.931123 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 24 01:44:05.931135 kernel: TSC deadline timer available Nov 24 01:44:05.931146 kernel: CPU topo: Max. logical packages: 16 Nov 24 01:44:05.931158 kernel: CPU topo: Max. logical dies: 16 Nov 24 01:44:05.931170 kernel: CPU topo: Max. dies per package: 1 Nov 24 01:44:05.931182 kernel: CPU topo: Max. 
threads per core: 1 Nov 24 01:44:05.931193 kernel: CPU topo: Num. cores per package: 1 Nov 24 01:44:05.931210 kernel: CPU topo: Num. threads per package: 1 Nov 24 01:44:05.931222 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Nov 24 01:44:05.931234 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 24 01:44:05.931245 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 24 01:44:05.931257 kernel: Booting paravirtualized kernel on KVM Nov 24 01:44:05.931269 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 01:44:05.931281 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 24 01:44:05.931293 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Nov 24 01:44:05.931305 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Nov 24 01:44:05.931321 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 24 01:44:05.931333 kernel: kvm-guest: PV spinlocks enabled Nov 24 01:44:05.931345 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 01:44:05.931358 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 01:44:05.931371 kernel: random: crng init done Nov 24 01:44:05.931382 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 24 01:44:05.931394 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 24 01:44:05.931406 kernel: Fallback order for Node 0: 0 Nov 24 01:44:05.931422 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Nov 24 01:44:05.931434 kernel: Policy zone: DMA32 Nov 24 01:44:05.931446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 01:44:05.931458 kernel: software IO TLB: area num 16. Nov 24 01:44:05.931470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 24 01:44:05.931482 kernel: Kernel/User page tables isolation: enabled Nov 24 01:44:05.931494 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 01:44:05.931505 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 01:44:05.931517 kernel: Dynamic Preempt: voluntary Nov 24 01:44:05.931534 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 01:44:05.931547 kernel: rcu: RCU event tracing is enabled. Nov 24 01:44:05.931559 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 24 01:44:05.931571 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 01:44:05.931588 kernel: Rude variant of Tasks RCU enabled. Nov 24 01:44:05.931601 kernel: Tracing variant of Tasks RCU enabled. Nov 24 01:44:05.931639 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 01:44:05.931654 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 24 01:44:05.931667 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 24 01:44:05.931685 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Nov 24 01:44:05.931697 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 24 01:44:05.931709 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Nov 24 01:44:05.931721 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 24 01:44:05.931744 kernel: Console: colour VGA+ 80x25 Nov 24 01:44:05.931760 kernel: printk: legacy console [tty0] enabled Nov 24 01:44:05.931773 kernel: printk: legacy console [ttyS0] enabled Nov 24 01:44:05.931785 kernel: ACPI: Core revision 20240827 Nov 24 01:44:05.931803 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 01:44:05.931816 kernel: x2apic enabled Nov 24 01:44:05.931829 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 01:44:05.931842 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Nov 24 01:44:05.931859 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Nov 24 01:44:05.931871 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 24 01:44:05.931884 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 24 01:44:05.931896 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 24 01:44:05.931908 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 01:44:05.931925 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 01:44:05.931948 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 01:44:05.931961 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 24 01:44:05.931973 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 24 01:44:05.931985 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 24 01:44:05.931997 kernel: MDS: Mitigation: Clear CPU buffers Nov 24 01:44:05.932010 kernel: MMIO Stale Data: Unknown: No mitigations Nov 24 01:44:05.932022 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 24 01:44:05.932033 kernel: active return thunk: its_return_thunk Nov 24 01:44:05.932046 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 24 01:44:05.932058 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 01:44:05.932075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 01:44:05.932087 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 01:44:05.932100 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 01:44:05.932112 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 24 01:44:05.932124 kernel: Freeing SMP alternatives memory: 32K Nov 24 01:44:05.932136 kernel: pid_max: default: 32768 minimum: 301 Nov 24 01:44:05.932148 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 01:44:05.932161 kernel: landlock: Up and running. Nov 24 01:44:05.932173 kernel: SELinux: Initializing. Nov 24 01:44:05.932185 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 24 01:44:05.932198 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 24 01:44:05.932215 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Nov 24 01:44:05.932227 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Nov 24 01:44:05.932240 kernel: signal: max sigframe size: 1776 Nov 24 01:44:05.932257 kernel: rcu: Hierarchical SRCU implementation. Nov 24 01:44:05.932271 kernel: rcu: Max phase no-delay instances is 400. Nov 24 01:44:05.932283 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Nov 24 01:44:05.932296 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 24 01:44:05.932308 kernel: smp: Bringing up secondary CPUs ... Nov 24 01:44:05.932320 kernel: smpboot: x86: Booting SMP configuration: Nov 24 01:44:05.932337 kernel: .... node #0, CPUs: #1 Nov 24 01:44:05.932350 kernel: smp: Brought up 1 node, 2 CPUs Nov 24 01:44:05.932362 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Nov 24 01:44:05.932375 kernel: Memory: 1887476K/2096616K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 203124K reserved, 0K cma-reserved) Nov 24 01:44:05.932388 kernel: devtmpfs: initialized Nov 24 01:44:05.932400 kernel: x86/mm: Memory block size: 128MB Nov 24 01:44:05.932413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 01:44:05.932425 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 24 01:44:05.932437 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 01:44:05.932454 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 01:44:05.932467 kernel: audit: initializing netlink subsys (disabled) Nov 24 01:44:05.932479 kernel: audit: type=2000 audit(1763948641.667:1): state=initialized audit_enabled=0 res=1 Nov 24 01:44:05.932492 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 01:44:05.932504 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 01:44:05.932516 kernel: cpuidle: using governor menu Nov 24 01:44:05.932529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 01:44:05.932541 kernel: dca service started, version 1.12.1 Nov 24 01:44:05.932553 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 24 01:44:05.932576 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 24 01:44:05.932601 kernel: PCI: Using configuration type 1 for base access Nov 24 01:44:05.932630 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 24 01:44:05.932643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 01:44:05.932656 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 01:44:05.932668 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 01:44:05.932681 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 01:44:05.932693 kernel: ACPI: Added _OSI(Module Device) Nov 24 01:44:05.932715 kernel: ACPI: Added _OSI(Processor Device) Nov 24 01:44:05.932734 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 01:44:05.932746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 24 01:44:05.932759 kernel: ACPI: Interpreter enabled Nov 24 01:44:05.932771 kernel: ACPI: PM: (supports S0 S5) Nov 24 01:44:05.932783 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 01:44:05.932796 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 01:44:05.932808 kernel: PCI: Using E820 reservations for host bridge windows Nov 24 01:44:05.932821 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 24 01:44:05.932833 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 24 01:44:05.933120 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 24 01:44:05.933295 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 24 01:44:05.933462 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 24 01:44:05.933481 kernel: PCI host bridge to bus 0000:00 Nov 24 01:44:05.933752 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 24 01:44:05.933952 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 24 01:44:05.934116 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 24 01:44:05.934312 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 24 01:44:05.934465 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 24 01:44:05.934725 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Nov 24 01:44:05.934890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 24 01:44:05.935112 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 24 01:44:05.935320 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Nov 24 01:44:05.935497 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Nov 24 01:44:05.935677 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Nov 24 01:44:05.935842 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Nov 24 01:44:05.936030 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 24 01:44:05.936227 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.936401 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Nov 24 01:44:05.936574 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 24 01:44:05.937155 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 24 01:44:05.937324 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 24 01:44:05.937498 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.937684 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Nov 24 01:44:05.937878 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 24 
01:44:05.938057 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 24 01:44:05.938273 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 24 01:44:05.938478 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.938700 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Nov 24 01:44:05.938868 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 24 01:44:05.939086 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 24 01:44:05.939255 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 24 01:44:05.941648 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.941864 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Nov 24 01:44:05.942072 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 24 01:44:05.942243 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 24 01:44:05.942410 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 24 01:44:05.942604 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.942803 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Nov 24 01:44:05.943019 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 24 01:44:05.943187 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 24 01:44:05.943403 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 24 01:44:05.943580 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.945841 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Nov 24 01:44:05.946038 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 24 01:44:05.946221 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 24 01:44:05.946401 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 24 01:44:05.946627 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.947306 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Nov 24 01:44:05.947474 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 24 01:44:05.947674 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 24 01:44:05.947842 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 24 01:44:05.948039 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 24 01:44:05.948205 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Nov 24 01:44:05.948377 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 24 01:44:05.948539 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 24 01:44:05.948735 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 24 01:44:05.948911 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 24 01:44:05.949088 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Nov 24 01:44:05.949251 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Nov 24 01:44:05.949413 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Nov 24 01:44:05.949659 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Nov 24 01:44:05.949836 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 24 01:44:05.950014 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Nov 24 01:44:05.950178 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Nov 24 01:44:05.950385 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Nov 24 01:44:05.950598 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 24 01:44:05.950801 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 24 01:44:05.951014 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 24 01:44:05.951180 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Nov 24 01:44:05.951342 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Nov 24 01:44:05.951532 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 24 01:44:05.951735 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 24 01:44:05.951942 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Nov 24 01:44:05.952124 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Nov 24 01:44:05.952294 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 24 01:44:05.952498 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 24 01:44:05.952710 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 24 01:44:05.952916 kernel: pci_bus 0000:02: extended config space not accessible Nov 24 01:44:05.953133 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Nov 24 01:44:05.953312 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Nov 24 01:44:05.953493 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 24 01:44:05.953689 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 24 01:44:05.953861 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Nov 24 01:44:05.954045 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 24 01:44:05.954235 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 24 01:44:05.954407 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Nov 24 01:44:05.954573 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 24 01:44:05.954777 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 24 01:44:05.954955 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 24 01:44:05.955122 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 24 01:44:05.955287 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 24 01:44:05.955497 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 24 01:44:05.955518 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 24 01:44:05.955531 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 24 01:44:05.955552 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 24 01:44:05.955564 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 24 01:44:05.955577 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 24 01:44:05.955590 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 24 01:44:05.955602 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 24 01:44:05.955630 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 24 01:44:05.955644 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 24 01:44:05.955657 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 24 01:44:05.955670 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 24 01:44:05.955688 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 24 01:44:05.955700 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 
24 01:44:05.955713 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 24 01:44:05.955726 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 24 01:44:05.955738 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 24 01:44:05.955751 kernel: iommu: Default domain type: Translated Nov 24 01:44:05.955764 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 01:44:05.955776 kernel: PCI: Using ACPI for IRQ routing Nov 24 01:44:05.955788 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 24 01:44:05.955806 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 24 01:44:05.955819 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Nov 24 01:44:05.956000 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 24 01:44:05.956165 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 24 01:44:05.956328 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 24 01:44:05.956347 kernel: vgaarb: loaded Nov 24 01:44:05.956359 kernel: clocksource: Switched to clocksource kvm-clock Nov 24 01:44:05.956372 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 01:44:05.956391 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 01:44:05.956404 kernel: pnp: PnP ACPI init Nov 24 01:44:05.956594 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 24 01:44:05.956647 kernel: pnp: PnP ACPI: found 5 devices Nov 24 01:44:05.956663 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 01:44:05.956676 kernel: NET: Registered PF_INET protocol family Nov 24 01:44:05.956689 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 24 01:44:05.956702 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 24 01:44:05.956714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 01:44:05.956734 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 01:44:05.956747 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 24 01:44:05.956759 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 24 01:44:05.956772 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 24 01:44:05.956785 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 24 01:44:05.956797 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 01:44:05.956810 kernel: NET: Registered PF_XDP protocol family Nov 24 01:44:05.956987 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Nov 24 01:44:05.957159 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 24 01:44:05.957323 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 24 01:44:05.957508 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 24 01:44:05.957697 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 24 01:44:05.957863 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 24 01:44:05.958039 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 24 01:44:05.958204 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 24 01:44:05.958368 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Nov 24 01:44:05.958579 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Nov 24 01:44:05.960834 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Nov 24 01:44:05.961037 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Nov 24 01:44:05.961212 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Nov 24 01:44:05.961418 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Nov 24 01:44:05.961588 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Nov 24 01:44:05.962796 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Nov 24 01:44:05.962998 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 24 01:44:05.963200 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 24 01:44:05.963384 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 24 01:44:05.963576 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 24 01:44:05.966807 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 24 01:44:05.967006 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 24 01:44:05.967184 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 24 01:44:05.967357 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 24 01:44:05.967527 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 24 01:44:05.967730 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 24 01:44:05.967911 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 24 01:44:05.968103 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 24 01:44:05.968270 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 24 01:44:05.968445 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 24 01:44:05.971645 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 24 01:44:05.971850 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 24 01:44:05.972048 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 24 01:44:05.972218 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 24 01:44:05.972386 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 24 01:44:05.972551 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 24 01:44:05.973775 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 24 01:44:05.973974 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 24 01:44:05.974147 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 24 01:44:05.974314 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 24 01:44:05.974479 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 24 01:44:05.974665 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 24 01:44:05.974841 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 24 01:44:05.975021 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 24 01:44:05.975188 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 24 01:44:05.975356 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 24 01:44:05.975540 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 24 01:44:05.981822 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 24 01:44:05.982033 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 24 01:44:05.982216 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 24 01:44:05.982379 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 24 01:44:05.982534 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 24 01:44:05.982735 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 24 01:44:05.982895 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 24 01:44:05.983076 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 24 01:44:05.983238 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Nov 24 01:44:05.983701 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 24 01:44:05.983874 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Nov 24 01:44:05.984051 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 24 01:44:05.984461 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 24 01:44:05.984700 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Nov 24 01:44:05.984870 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 24 01:44:05.985046 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 24 01:44:05.985240 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Nov 24 01:44:05.985408 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 24 01:44:05.985573 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 24 01:44:05.985786 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 24 01:44:05.985965 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 24 01:44:05.986130 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 24 01:44:05.986305 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Nov 24 01:44:05.986464 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 24 01:44:05.986740 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 24 01:44:05.986998 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Nov 24 01:44:05.987159 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 24 01:44:05.987315 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 24 01:44:05.987506 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Nov 24 01:44:05.987686 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Nov 24 01:44:05.987869 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 24 01:44:05.988060 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Nov 24 01:44:05.988220 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 24 01:44:05.988375 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 24 01:44:05.988396 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 24 01:44:05.988417 kernel: PCI: CLS 0 bytes, default 64 Nov 24 01:44:05.988430 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 24 01:44:05.988444 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Nov 24 01:44:05.988457 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 24 01:44:05.988471 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Nov 24 01:44:05.988485 kernel: Initialise system trusted keyrings Nov 24 01:44:05.988498 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 24 01:44:05.988512 
kernel: Key type asymmetric registered Nov 24 01:44:05.988525 kernel: Asymmetric key parser 'x509' registered Nov 24 01:44:05.988542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 24 01:44:05.988555 kernel: io scheduler mq-deadline registered Nov 24 01:44:05.988569 kernel: io scheduler kyber registered Nov 24 01:44:05.988582 kernel: io scheduler bfq registered Nov 24 01:44:05.988777 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 24 01:44:05.990593 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 24 01:44:05.990789 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.990982 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 24 01:44:05.991151 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 24 01:44:05.991319 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.991485 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 24 01:44:05.991667 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 24 01:44:05.991835 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.992029 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 24 01:44:05.994365 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 24 01:44:05.994555 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.994756 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 24 01:44:05.994967 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 24 01:44:05.995148 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.995331 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 24 01:44:05.995501 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 24 01:44:05.995699 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.995873 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 24 01:44:05.996058 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 24 01:44:05.996229 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.996416 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 24 01:44:05.996584 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 24 01:44:05.996770 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 24 01:44:05.996791 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 01:44:05.996810 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 24 01:44:05.996824 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 24 01:44:05.996837 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 01:44:05.996851 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 01:44:05.996871 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 24 
01:44:05.996885 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 24 01:44:05.996902 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 24 01:44:05.997093 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 24 01:44:05.997116 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 24 01:44:05.997271 kernel: rtc_cmos 00:03: registered as rtc0 Nov 24 01:44:05.997431 kernel: rtc_cmos 00:03: setting system clock to 2025-11-24T01:44:05 UTC (1763948645) Nov 24 01:44:05.997590 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 24 01:44:05.997629 kernel: intel_pstate: CPU model not supported Nov 24 01:44:05.997653 kernel: NET: Registered PF_INET6 protocol family Nov 24 01:44:05.997667 kernel: Segment Routing with IPv6 Nov 24 01:44:05.997680 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 01:44:05.997693 kernel: NET: Registered PF_PACKET protocol family Nov 24 01:44:05.997706 kernel: Key type dns_resolver registered Nov 24 01:44:05.997719 kernel: IPI shorthand broadcast: enabled Nov 24 01:44:05.997732 kernel: sched_clock: Marking stable (3532004231, 218989380)->(3896873779, -145880168) Nov 24 01:44:05.997746 kernel: registered taskstats version 1 Nov 24 01:44:05.997765 kernel: Loading compiled-in X.509 certificates Nov 24 01:44:05.997778 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607' Nov 24 01:44:05.997791 kernel: Demotion targets for Node 0: null Nov 24 01:44:05.997804 kernel: Key type .fscrypt registered Nov 24 01:44:05.997817 kernel: Key type fscrypt-provisioning registered Nov 24 01:44:05.997829 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 24 01:44:05.997843 kernel: ima: Allocated hash algorithm: sha1 Nov 24 01:44:05.997856 kernel: ima: No architecture policies found Nov 24 01:44:05.997869 kernel: clk: Disabling unused clocks Nov 24 01:44:05.997887 kernel: Warning: unable to open an initial console. Nov 24 01:44:05.997900 kernel: Freeing unused kernel image (initmem) memory: 46200K Nov 24 01:44:05.997913 kernel: Write protecting the kernel read-only data: 40960k Nov 24 01:44:05.997945 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 01:44:05.997959 kernel: Run /init as init process Nov 24 01:44:05.997972 kernel: with arguments: Nov 24 01:44:05.997985 kernel: /init Nov 24 01:44:05.997998 kernel: with environment: Nov 24 01:44:05.998011 kernel: HOME=/ Nov 24 01:44:05.998029 kernel: TERM=linux Nov 24 01:44:05.998044 systemd[1]: Successfully made /usr/ read-only. Nov 24 01:44:05.998061 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 01:44:05.998075 systemd[1]: Detected virtualization kvm. Nov 24 01:44:05.998088 systemd[1]: Detected architecture x86-64. Nov 24 01:44:05.998102 systemd[1]: Running in initrd. Nov 24 01:44:05.998116 systemd[1]: No hostname configured, using default hostname. Nov 24 01:44:05.998134 systemd[1]: Hostname set to . Nov 24 01:44:05.998148 systemd[1]: Initializing machine ID from VM UUID. Nov 24 01:44:05.998161 systemd[1]: Queued start job for default target initrd.target. Nov 24 01:44:05.998175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 24 01:44:05.998189 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 01:44:05.998216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 01:44:05.998229 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 01:44:05.998243 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 01:44:05.998262 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 01:44:05.998276 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 01:44:05.998289 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 01:44:05.998303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 01:44:05.998316 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 01:44:05.998330 systemd[1]: Reached target paths.target - Path Units. Nov 24 01:44:05.998343 systemd[1]: Reached target slices.target - Slice Units. Nov 24 01:44:05.998373 systemd[1]: Reached target swap.target - Swaps. Nov 24 01:44:05.998387 systemd[1]: Reached target timers.target - Timer Units. Nov 24 01:44:05.998401 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 01:44:05.998415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 01:44:05.998446 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 01:44:05.998459 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 01:44:05.998473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 01:44:05.998486 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 01:44:05.998512 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 01:44:05.998529 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 01:44:05.998543 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 01:44:05.998556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 01:44:05.998582 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 01:44:05.998595 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 01:44:05.998609 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 01:44:05.998628 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 01:44:05.998654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 01:44:05.998673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 01:44:05.998698 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 01:44:05.998753 systemd-journald[210]: Collecting audit messages is disabled. Nov 24 01:44:05.998790 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 01:44:05.998805 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 01:44:05.998819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 24 01:44:05.998834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 01:44:05.998847 kernel: Bridge firewalling registered Nov 24 01:44:05.998865 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 01:44:05.998880 systemd-journald[210]: Journal started Nov 24 01:44:05.998904 systemd-journald[210]: Runtime Journal (/run/log/journal/e9a01c46b50e4c729222394e62ccab03) is 4.7M, max 37.8M, 33.1M free. Nov 24 01:44:05.933942 systemd-modules-load[211]: Inserted module 'overlay' Nov 24 01:44:05.985740 systemd-modules-load[211]: Inserted module 'br_netfilter' Nov 24 01:44:06.005427 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 01:44:06.005456 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 01:44:06.074030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 01:44:06.080987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 01:44:06.087793 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 01:44:06.091774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 01:44:06.094792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 01:44:06.098065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 01:44:06.113082 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 01:44:06.115817 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 01:44:06.123592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 01:44:06.127876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 01:44:06.133821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 01:44:06.136779 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 01:44:06.162684 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 01:44:06.187396 systemd-resolved[246]: Positive Trust Anchors: Nov 24 01:44:06.188392 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 01:44:06.188435 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 01:44:06.195702 systemd-resolved[246]: Defaulting to hostname 'linux'. Nov 24 01:44:06.200695 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 01:44:06.201449 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 01:44:06.273650 kernel: SCSI subsystem initialized Nov 24 01:44:06.284714 kernel: Loading iSCSI transport class v2.0-870. Nov 24 01:44:06.297668 kernel: iscsi: registered transport (tcp) Nov 24 01:44:06.323769 kernel: iscsi: registered transport (qla4xxx) Nov 24 01:44:06.323879 kernel: QLogic iSCSI HBA Driver Nov 24 01:44:06.348460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 01:44:06.367802 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 01:44:06.369318 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 01:44:06.432471 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 01:44:06.436261 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 01:44:06.492670 kernel: raid6: sse2x4 gen() 13959 MB/s Nov 24 01:44:06.510654 kernel: raid6: sse2x2 gen() 9750 MB/s Nov 24 01:44:06.529211 kernel: raid6: sse2x1 gen() 9792 MB/s Nov 24 01:44:06.529303 kernel: raid6: using algorithm sse2x4 gen() 13959 MB/s Nov 24 01:44:06.548238 kernel: raid6: .... xor() 7773 MB/s, rmw enabled Nov 24 01:44:06.548324 kernel: raid6: using ssse3x2 recovery algorithm Nov 24 01:44:06.573709 kernel: xor: automatically using best checksumming function avx Nov 24 01:44:06.786670 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 01:44:06.795527 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 01:44:06.798695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 01:44:06.829887 systemd-udevd[459]: Using default interface naming scheme 'v255'. Nov 24 01:44:06.839363 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 01:44:06.844765 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 01:44:06.876090 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Nov 24 01:44:06.910404 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 01:44:06.917454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 01:44:07.052974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 01:44:07.058225 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 24 01:44:07.162086 kernel: ACPI: bus type USB registered Nov 24 01:44:07.162168 kernel: usbcore: registered new interface driver usbfs Nov 24 01:44:07.163579 kernel: usbcore: registered new interface driver hub Nov 24 01:44:07.165639 kernel: usbcore: registered new device driver usb Nov 24 01:44:07.184649 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 24 01:44:07.190769 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 24 01:44:07.209064 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 01:44:07.209131 kernel: GPT:17805311 != 125829119 Nov 24 01:44:07.209152 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 01:44:07.209169 kernel: GPT:17805311 != 125829119 Nov 24 01:44:07.209195 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 01:44:07.209212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 01:44:07.214631 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 24 01:44:07.217748 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 01:44:07.225655 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 24 01:44:07.236670 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 24 01:44:07.241642 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 24 01:44:07.243640 kernel: AES CTR mode by8 optimization enabled Nov 24 01:44:07.246815 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 01:44:07.247009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 01:44:07.248898 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 01:44:07.258068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 01:44:07.262404 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 01:44:07.292666 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 24 01:44:07.299636 kernel: libata version 3.00 loaded. Nov 24 01:44:07.306033 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 24 01:44:07.309643 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 24 01:44:07.310297 kernel: hub 1-0:1.0: USB hub found Nov 24 01:44:07.311970 kernel: hub 1-0:1.0: 4 ports detected Nov 24 01:44:07.317201 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 24 01:44:07.319643 kernel: hub 2-0:1.0: USB hub found Nov 24 01:44:07.323643 kernel: hub 2-0:1.0: 4 ports detected Nov 24 01:44:07.384366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 24 01:44:07.456693 kernel: ahci 0000:00:1f.2: version 3.0 Nov 24 01:44:07.456990 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 24 01:44:07.457013 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 24 01:44:07.457217 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 24 01:44:07.457448 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 24 01:44:07.457666 kernel: scsi host0: ahci Nov 24 01:44:07.457914 kernel: scsi host1: ahci Nov 24 01:44:07.458115 kernel: scsi host2: ahci Nov 24 01:44:07.458337 kernel: scsi host3: ahci Nov 24 01:44:07.458545 kernel: scsi host4: ahci Nov 24 01:44:07.458984 kernel: scsi host5: ahci Nov 24 01:44:07.459205 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1 Nov 24 01:44:07.459226 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1 Nov 24 01:44:07.459244 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1 Nov 24 01:44:07.459268 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1 Nov 24 01:44:07.459285 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1 Nov 24 01:44:07.459302 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1 Nov 24 01:44:07.455653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 01:44:07.485281 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 24 01:44:07.486184 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 24 01:44:07.500037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 24 01:44:07.512825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 01:44:07.515842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 01:44:07.540668 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 01:44:07.542085 disk-uuid[610]: Primary Header is updated. Nov 24 01:44:07.542085 disk-uuid[610]: Secondary Entries is updated. Nov 24 01:44:07.542085 disk-uuid[610]: Secondary Header is updated. Nov 24 01:44:07.554684 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 24 01:44:07.701777 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 24 01:44:07.716808 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 24 01:44:07.716854 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 24 01:44:07.716874 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 24 01:44:07.717647 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 24 01:44:07.720834 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 24 01:44:07.720873 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 24 01:44:07.736153 kernel: usbcore: registered new interface driver usbhid Nov 24 01:44:07.736209 kernel: usbhid: USB HID core driver Nov 24 01:44:07.743887 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Nov 24 01:44:07.743960 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 24 01:44:07.759083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Nov 24 01:44:07.760939 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 01:44:07.762566 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 01:44:07.763325 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 01:44:07.766039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 01:44:07.794437 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 01:44:08.555677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 01:44:08.557282 disk-uuid[612]: The operation has completed successfully. Nov 24 01:44:08.621317 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 01:44:08.621481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 01:44:08.657057 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 01:44:08.675981 sh[637]: Success Nov 24 01:44:08.702023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 01:44:08.702158 kernel: device-mapper: uevent: version 1.0.3 Nov 24 01:44:08.702181 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 01:44:08.716637 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Nov 24 01:44:08.762554 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 01:44:08.764318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 01:44:08.773085 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 24 01:44:08.788702 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (649) Nov 24 01:44:08.791852 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 01:44:08.791900 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 01:44:08.802945 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 01:44:08.803000 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 01:44:08.809357 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 01:44:08.810652 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 01:44:08.811491 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 01:44:08.812532 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 01:44:08.815804 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 01:44:08.854258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (683) Nov 24 01:44:08.854341 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 01:44:08.856398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 01:44:08.863484 kernel: BTRFS info (device vda6): turning on async discard Nov 24 01:44:08.863531 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 01:44:08.870826 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 01:44:08.871988 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 24 01:44:08.874863 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 01:44:08.987387 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 01:44:08.991834 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 01:44:09.042596 systemd-networkd[819]: lo: Link UP Nov 24 01:44:09.043514 systemd-networkd[819]: lo: Gained carrier Nov 24 01:44:09.049901 systemd-networkd[819]: Enumeration completed Nov 24 01:44:09.050432 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 01:44:09.050438 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 01:44:09.052780 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 01:44:09.055361 systemd-networkd[819]: eth0: Link UP Nov 24 01:44:09.055584 systemd-networkd[819]: eth0: Gained carrier Nov 24 01:44:09.055601 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 01:44:09.060790 systemd[1]: Reached target network.target - Network. Nov 24 01:44:09.210052 systemd-networkd[819]: eth0: DHCPv4 address 10.230.76.74/30, gateway 10.230.76.73 acquired from 10.230.76.73 Nov 24 01:44:09.284568 ignition[737]: Ignition 2.22.0 Nov 24 01:44:09.284590 ignition[737]: Stage: fetch-offline Nov 24 01:44:09.284688 ignition[737]: no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:09.287279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 01:44:09.284728 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:09.284926 ignition[737]: parsed url from cmdline: "" Nov 24 01:44:09.284933 ignition[737]: no config URL provided Nov 24 01:44:09.284949 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 01:44:09.284971 ignition[737]: no config at "/usr/lib/ignition/user.ign" Nov 24 01:44:09.285005 ignition[737]: failed to fetch config: resource requires networking Nov 24 01:44:09.291867 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 24 01:44:09.285533 ignition[737]: Ignition finished successfully Nov 24 01:44:09.331683 ignition[828]: Ignition 2.22.0 Nov 24 01:44:09.331706 ignition[828]: Stage: fetch Nov 24 01:44:09.331974 ignition[828]: no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:09.331993 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:09.332153 ignition[828]: parsed url from cmdline: "" Nov 24 01:44:09.332160 ignition[828]: no config URL provided Nov 24 01:44:09.332170 ignition[828]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 01:44:09.332186 ignition[828]: no config at "/usr/lib/ignition/user.ign" Nov 24 01:44:09.332395 ignition[828]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 24 01:44:09.333705 ignition[828]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 24 01:44:09.333733 ignition[828]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Nov 24 01:44:09.354441 ignition[828]: GET result: OK Nov 24 01:44:09.358297 ignition[828]: parsing config with SHA512: d637a6c2042a7a8428e6d413b4ff175489673fb107d1890f553fbe48d51836407901b8d193491493f0a1e1280551de61a5362cfaf251c1bc8a2e9e95215964cd Nov 24 01:44:09.368065 unknown[828]: fetched base config from "system" Nov 24 01:44:09.368091 unknown[828]: fetched base config from "system" Nov 24 01:44:09.369156 ignition[828]: fetch: fetch complete Nov 24 01:44:09.368188 unknown[828]: fetched user config from "openstack" Nov 24 01:44:09.369165 ignition[828]: fetch: fetch passed Nov 24 01:44:09.371663 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 01:44:09.369233 ignition[828]: Ignition finished successfully Nov 24 01:44:09.374948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 01:44:09.436910 ignition[834]: Ignition 2.22.0 Nov 24 01:44:09.436942 ignition[834]: Stage: kargs Nov 24 01:44:09.437160 ignition[834]: no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:09.437179 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:09.438148 ignition[834]: kargs: kargs passed Nov 24 01:44:09.441380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 01:44:09.438218 ignition[834]: Ignition finished successfully Nov 24 01:44:09.444839 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 01:44:09.486882 ignition[840]: Ignition 2.22.0 Nov 24 01:44:09.486943 ignition[840]: Stage: disks Nov 24 01:44:09.487125 ignition[840]: no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:09.487143 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:09.490069 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 01:44:09.488485 ignition[840]: disks: disks passed Nov 24 01:44:09.492005 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 01:44:09.488551 ignition[840]: Ignition finished successfully Nov 24 01:44:09.493288 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 01:44:09.494695 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 01:44:09.496006 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 01:44:09.497415 systemd[1]: Reached target basic.target - Basic System. Nov 24 01:44:09.499946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 01:44:09.530949 systemd-fsck[848]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Nov 24 01:44:09.534842 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 01:44:09.537427 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 01:44:09.660720 kernel: EXT4-fs (vda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 01:44:09.662305 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 01:44:09.663515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 01:44:09.666015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 01:44:09.668376 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 01:44:09.671167 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 24 01:44:09.679025 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... 
Nov 24 01:44:09.680046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 01:44:09.680089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 01:44:09.685784 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 01:44:09.695816 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Nov 24 01:44:09.695872 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 01:44:09.695901 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 01:44:09.697134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 01:44:09.714559 kernel: BTRFS info (device vda6): turning on async discard Nov 24 01:44:09.714630 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 01:44:09.719476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 01:44:09.778912 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:09.780236 initrd-setup-root[884]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 01:44:09.789766 initrd-setup-root[891]: cut: /sysroot/etc/group: No such file or directory Nov 24 01:44:09.799561 initrd-setup-root[898]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 01:44:09.806479 initrd-setup-root[905]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 01:44:09.923132 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 01:44:09.925933 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 01:44:09.927823 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 01:44:09.957107 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 01:44:09.960840 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 01:44:09.977943 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 01:44:10.021235 ignition[973]: INFO : Ignition 2.22.0 Nov 24 01:44:10.023805 ignition[973]: INFO : Stage: mount Nov 24 01:44:10.023805 ignition[973]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:10.023805 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:10.026180 ignition[973]: INFO : mount: mount passed Nov 24 01:44:10.026180 ignition[973]: INFO : Ignition finished successfully Nov 24 01:44:10.027403 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 01:44:10.256327 systemd-networkd[819]: eth0: Gained IPv6LL Nov 24 01:44:10.810663 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:11.160107 systemd-networkd[819]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9312:24:19ff:fee6:4c4a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9312:24:19ff:fee6:4c4a/64 assigned by NDisc. Nov 24 01:44:11.160123 systemd-networkd[819]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Nov 24 01:44:12.819683 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:16.829685 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:16.836287 coreos-metadata[858]: Nov 24 01:44:16.836 WARN failed to locate config-drive, using the metadata service API instead Nov 24 01:44:16.862400 coreos-metadata[858]: Nov 24 01:44:16.862 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 24 01:44:16.879854 coreos-metadata[858]: Nov 24 01:44:16.879 INFO Fetch successful Nov 24 01:44:16.880896 coreos-metadata[858]: Nov 24 01:44:16.880 INFO wrote hostname srv-7vvyr.gb1.brightbox.com to /sysroot/etc/hostname Nov 24 01:44:16.883755 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 24 01:44:16.884106 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Nov 24 01:44:16.889140 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 01:44:16.918109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 01:44:16.954646 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (989) Nov 24 01:44:16.954731 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 01:44:16.957705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 01:44:16.962875 kernel: BTRFS info (device vda6): turning on async discard Nov 24 01:44:16.962910 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 01:44:16.967091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 01:44:17.003549 ignition[1007]: INFO : Ignition 2.22.0 Nov 24 01:44:17.003549 ignition[1007]: INFO : Stage: files Nov 24 01:44:17.005403 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:17.005403 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:17.005403 ignition[1007]: DEBUG : files: compiled without relabeling support, skipping Nov 24 01:44:17.008008 ignition[1007]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 01:44:17.008008 ignition[1007]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 01:44:17.015577 ignition[1007]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 01:44:17.015577 ignition[1007]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 01:44:17.015577 ignition[1007]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 01:44:17.015577 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 01:44:17.015577 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 01:44:17.009887 unknown[1007]: wrote ssh authorized keys file for user: core Nov 24 01:44:17.196863 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 01:44:17.451638 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 01:44:17.451638 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 01:44:17.454656 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 01:44:17.462263 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 01:44:17.462263 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 01:44:17.462263 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 01:44:17.462263 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 01:44:17.462263 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 01:44:17.462263 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 01:44:17.784384 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 01:44:19.572493 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 01:44:19.572493 ignition[1007]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 01:44:19.575563 ignition[1007]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 01:44:19.576866 ignition[1007]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 01:44:19.576866 ignition[1007]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 01:44:19.576866 ignition[1007]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 01:44:19.576866 ignition[1007]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 01:44:19.582244 ignition[1007]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 01:44:19.582244 ignition[1007]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Nov 24 01:44:19.582244 ignition[1007]: INFO : files: files passed Nov 24 01:44:19.582244 ignition[1007]: INFO : Ignition finished successfully Nov 24 01:44:19.579937 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 01:44:19.601823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 01:44:19.608813 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 01:44:19.619923 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 01:44:19.620909 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 01:44:19.630343 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 01:44:19.632176 initrd-setup-root-after-ignition[1037]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 01:44:19.634163 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 01:44:19.635625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 01:44:19.637007 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 01:44:19.639078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 01:44:19.691467 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 01:44:19.691719 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 01:44:19.693429 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 01:44:19.694896 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 01:44:19.696513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 01:44:19.697720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 01:44:19.742661 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 01:44:19.745395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 01:44:19.769393 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 01:44:19.771333 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 01:44:19.772270 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 01:44:19.773812 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 01:44:19.773979 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 01:44:19.775809 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 01:44:19.776912 systemd[1]: Stopped target basic.target - Basic System. Nov 24 01:44:19.778509 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 01:44:19.779786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 01:44:19.781279 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 01:44:19.782781 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 01:44:19.784304 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 01:44:19.785815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 24 01:44:19.787461 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 01:44:19.788923 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 01:44:19.790367 systemd[1]: Stopped target swap.target - Swaps. Nov 24 01:44:19.791665 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 01:44:19.791875 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 01:44:19.793537 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 01:44:19.794430 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 01:44:19.796065 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 01:44:19.796427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 01:44:19.797503 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 01:44:19.797740 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 01:44:19.799675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 01:44:19.799934 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 01:44:19.801501 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 01:44:19.801740 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 01:44:19.804151 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 01:44:19.810825 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 01:44:19.811033 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 01:44:19.819891 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 01:44:19.821223 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 01:44:19.822445 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 01:44:19.825959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 01:44:19.827022 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 01:44:19.840486 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 01:44:19.841536 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 01:44:19.855075 ignition[1061]: INFO : Ignition 2.22.0 Nov 24 01:44:19.856168 ignition[1061]: INFO : Stage: umount Nov 24 01:44:19.857143 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 01:44:19.857995 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 24 01:44:19.863192 ignition[1061]: INFO : umount: umount passed Nov 24 01:44:19.863979 ignition[1061]: INFO : Ignition finished successfully Nov 24 01:44:19.868258 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 01:44:19.869238 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 01:44:19.872355 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 01:44:19.874153 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 01:44:19.874775 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 01:44:19.875675 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 01:44:19.875743 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 01:44:19.876451 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Nov 24 01:44:19.876525 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 01:44:19.877797 systemd[1]: Stopped target network.target - Network. Nov 24 01:44:19.878505 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 01:44:19.878602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 01:44:19.879982 systemd[1]: Stopped target paths.target - Path Units. Nov 24 01:44:19.881316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 01:44:19.885750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 01:44:19.886737 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 01:44:19.888163 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 01:44:19.889879 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 01:44:19.889954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 01:44:19.891078 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 01:44:19.891137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 01:44:19.892359 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 01:44:19.892454 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 01:44:19.893687 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 01:44:19.893757 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 01:44:19.895353 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 01:44:19.897149 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 01:44:19.901966 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 01:44:19.902195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 01:44:19.902362 systemd-networkd[819]: eth0: DHCPv6 lease lost Nov 24 01:44:19.910413 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 01:44:19.910946 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 01:44:19.911123 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 01:44:19.913875 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 01:44:19.915144 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 01:44:19.916852 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 01:44:19.916940 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 01:44:19.919424 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 01:44:19.920126 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 01:44:19.920196 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 01:44:19.922866 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 01:44:19.922944 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 01:44:19.924768 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 01:44:19.924836 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 01:44:19.926823 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 01:44:19.926894 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 24 01:44:19.935358 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 01:44:19.940850 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 01:44:19.940952 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 01:44:19.951547 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 01:44:19.952732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 01:44:19.955253 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 01:44:19.955979 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 01:44:19.957934 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 01:44:19.958027 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 01:44:19.959509 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 01:44:19.959565 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 01:44:19.960963 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 01:44:19.961036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 01:44:19.963049 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 01:44:19.963126 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 01:44:19.964451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 01:44:19.964522 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 01:44:19.967129 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 01:44:19.968961 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 01:44:19.969036 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 01:44:19.971818 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 01:44:19.971893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 01:44:19.975291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 01:44:19.975385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 01:44:19.979500 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 01:44:19.979586 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 01:44:19.980737 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 01:44:19.986093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 01:44:19.986228 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 01:44:20.012280 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 01:44:20.012464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 01:44:20.014458 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 01:44:20.015708 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 01:44:20.015825 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 01:44:20.018297 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 01:44:20.040939 systemd[1]: Switching root. 
Nov 24 01:44:20.088280 systemd-journald[210]: Journal stopped Nov 24 01:44:21.647263 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). Nov 24 01:44:21.647416 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 01:44:21.647465 kernel: SELinux: policy capability open_perms=1 Nov 24 01:44:21.647493 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 01:44:21.647526 kernel: SELinux: policy capability always_check_network=0 Nov 24 01:44:21.647546 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 01:44:21.647573 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 01:44:21.647604 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 01:44:21.647705 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 01:44:21.647727 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 01:44:21.647746 kernel: audit: type=1403 audit(1763948660.370:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 01:44:21.647772 systemd[1]: Successfully loaded SELinux policy in 75.755ms. Nov 24 01:44:21.647810 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.967ms. Nov 24 01:44:21.647840 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 01:44:21.647861 systemd[1]: Detected virtualization kvm. Nov 24 01:44:21.647880 systemd[1]: Detected architecture x86-64. Nov 24 01:44:21.647911 systemd[1]: Detected first boot. Nov 24 01:44:21.647949 systemd[1]: Hostname set to <srv-7vvyr.gb1.brightbox.com>. Nov 24 01:44:21.647981 systemd[1]: Initializing machine ID from VM UUID. Nov 24 01:44:21.648002 zram_generator::config[1104]: No configuration found. Nov 24 01:44:21.648030 kernel: Guest personality initialized and is inactive Nov 24 01:44:21.648050 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 01:44:21.648068 kernel: Initialized host personality Nov 24 01:44:21.648097 kernel: NET: Registered PF_VSOCK protocol family Nov 24 01:44:21.648115 systemd[1]: Populated /etc with preset unit settings. Nov 24 01:44:21.648148 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 01:44:21.648170 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 01:44:21.648187 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 01:44:21.648206 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 01:44:21.648233 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 01:44:21.648252 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 01:44:21.648272 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 01:44:21.648290 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 01:44:21.648308 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 01:44:21.648346 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 01:44:21.648367 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 01:44:21.648386 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 24 01:44:21.648404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 01:44:21.648468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 01:44:21.648490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 01:44:21.648535 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 01:44:21.648555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 01:44:21.648574 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 01:44:21.662655 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 01:44:21.662709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 01:44:21.662732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 01:44:21.662779 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 01:44:21.662817 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 01:44:21.662839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 01:44:21.662859 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 01:44:21.662879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 01:44:21.662907 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 01:44:21.662928 systemd[1]: Reached target slices.target - Slice Units. Nov 24 01:44:21.662955 systemd[1]: Reached target swap.target - Swaps. Nov 24 01:44:21.662985 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 01:44:21.663023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 01:44:21.663044 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 01:44:21.663064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 01:44:21.663083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 01:44:21.663102 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 01:44:21.663130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 01:44:21.663151 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 01:44:21.663184 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 01:44:21.663205 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 01:44:21.663240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:21.663262 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 01:44:21.663287 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 01:44:21.663306 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 01:44:21.663327 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 01:44:21.663346 systemd[1]: Reached target machines.target - Containers. 
Nov 24 01:44:21.663366 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 01:44:21.663386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 01:44:21.663421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 01:44:21.663444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 01:44:21.663464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 01:44:21.663483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 01:44:21.663511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 01:44:21.663532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 01:44:21.663560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 01:44:21.663608 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 01:44:21.663657 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 01:44:21.663697 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 01:44:21.663719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 01:44:21.663738 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 01:44:21.663758 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 01:44:21.663778 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 01:44:21.663797 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 01:44:21.663816 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 01:44:21.663860 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 01:44:21.663895 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 01:44:21.663928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 01:44:21.663969 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 01:44:21.664003 systemd[1]: Stopped verity-setup.service. Nov 24 01:44:21.664023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:21.664043 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 01:44:21.664079 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 01:44:21.664101 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 01:44:21.664129 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 01:44:21.664164 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 01:44:21.664186 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 01:44:21.664225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 01:44:21.664246 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Nov 24 01:44:21.664265 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 01:44:21.664293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 01:44:21.664314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 01:44:21.664334 kernel: loop: module loaded Nov 24 01:44:21.664354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 01:44:21.664393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 01:44:21.664435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 01:44:21.664456 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 01:44:21.664487 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 01:44:21.664515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 01:44:21.664580 systemd-journald[1198]: Collecting audit messages is disabled. Nov 24 01:44:21.669692 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 01:44:21.669722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 01:44:21.669762 kernel: fuse: init (API version 7.41) Nov 24 01:44:21.669785 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 01:44:21.669833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 01:44:21.669856 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 01:44:21.669896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 01:44:21.669919 systemd-journald[1198]: Journal started Nov 24 01:44:21.669959 systemd-journald[1198]: Runtime Journal (/run/log/journal/e9a01c46b50e4c729222394e62ccab03) is 4.7M, max 37.8M, 33.1M free. Nov 24 01:44:21.205340 systemd[1]: Queued start job for default target multi-user.target. Nov 24 01:44:21.218990 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 24 01:44:21.219792 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 01:44:21.677681 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 01:44:21.685670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 01:44:21.696648 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 01:44:21.699672 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 01:44:21.706173 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 01:44:21.707932 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 01:44:21.708307 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 01:44:21.709351 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 01:44:21.709653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 01:44:21.711023 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 01:44:21.713366 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Nov 24 01:44:21.715190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 01:44:21.743308 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 01:44:21.751646 kernel: loop0: detected capacity change from 0 to 8 Nov 24 01:44:21.748810 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 01:44:21.750386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 01:44:21.754995 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 01:44:21.756249 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 01:44:21.757850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 01:44:21.769949 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 01:44:21.799125 systemd-journald[1198]: Time spent on flushing to /var/log/journal/e9a01c46b50e4c729222394e62ccab03 is 115.336ms for 1161 entries. Nov 24 01:44:21.799125 systemd-journald[1198]: System Journal (/var/log/journal/e9a01c46b50e4c729222394e62ccab03) is 8M, max 584.8M, 576.8M free. Nov 24 01:44:22.006635 systemd-journald[1198]: Received client request to flush runtime journal. Nov 24 01:44:22.006732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 01:44:22.006776 kernel: ACPI: bus type drm_connector registered Nov 24 01:44:22.006809 kernel: loop1: detected capacity change from 0 to 128560 Nov 24 01:44:22.006841 kernel: loop2: detected capacity change from 0 to 110984 Nov 24 01:44:21.826932 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 01:44:21.857293 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 01:44:21.861857 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 01:44:21.862761 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 01:44:21.971569 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 01:44:21.977487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 01:44:22.015189 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 01:44:22.033684 kernel: loop3: detected capacity change from 0 to 229808 Nov 24 01:44:22.036402 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 24 01:44:22.036428 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 24 01:44:22.048718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 01:44:22.145498 kernel: loop4: detected capacity change from 0 to 8 Nov 24 01:44:22.159690 kernel: loop5: detected capacity change from 0 to 128560 Nov 24 01:44:22.231643 kernel: loop6: detected capacity change from 0 to 110984 Nov 24 01:44:22.231595 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 01:44:22.241991 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 01:44:22.262311 kernel: loop7: detected capacity change from 0 to 229808 Nov 24 01:44:22.269157 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 01:44:22.282141 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Nov 24 01:44:22.284378 (sd-merge)[1263]: Merged extensions into '/usr'. 
Nov 24 01:44:22.293824 systemd[1]: Reload requested from client PID 1222 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 01:44:22.293864 systemd[1]: Reloading... Nov 24 01:44:22.514646 zram_generator::config[1288]: No configuration found. Nov 24 01:44:22.665128 ldconfig[1215]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 01:44:22.889168 systemd[1]: Reloading finished in 594 ms. Nov 24 01:44:22.907953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 01:44:22.912241 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 01:44:22.913664 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 01:44:22.915471 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 01:44:22.936866 systemd[1]: Starting ensure-sysext.service... Nov 24 01:44:22.940777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 01:44:22.946899 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 01:44:22.971113 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)... Nov 24 01:44:22.971149 systemd[1]: Reloading... Nov 24 01:44:22.996061 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 01:44:22.996550 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 01:44:22.997077 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 01:44:22.997503 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 01:44:22.999281 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 01:44:22.999799 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Nov 24 01:44:23.000028 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Nov 24 01:44:23.005908 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 01:44:23.007687 systemd-tmpfiles[1351]: Skipping /boot Nov 24 01:44:23.015320 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Nov 24 01:44:23.047664 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 01:44:23.048661 systemd-tmpfiles[1351]: Skipping /boot Nov 24 01:44:23.097667 zram_generator::config[1377]: No configuration found. Nov 24 01:44:23.538647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Nov 24 01:44:23.554648 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 01:44:23.570776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 01:44:23.571864 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 01:44:23.572395 systemd[1]: Reloading finished in 600 ms. Nov 24 01:44:23.586643 kernel: ACPI: button: Power Button [PWRF] Nov 24 01:44:23.587758 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 01:44:23.589297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 24 01:44:23.664731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:23.676726 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 24 01:44:23.717301 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 24 01:44:23.680248 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 01:44:23.692006 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 01:44:23.693013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 01:44:23.706050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 01:44:23.709234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 01:44:23.718425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 01:44:23.720924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 01:44:23.725341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 01:44:23.726122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 01:44:23.728864 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 01:44:23.735046 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 01:44:23.745048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 01:44:23.751062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 01:44:23.751893 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:23.759079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:23.759343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 01:44:23.759589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 01:44:23.760776 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 01:44:23.760906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:23.769246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:23.769658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 01:44:23.774721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 01:44:23.775600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 24 01:44:23.775770 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 01:44:23.775958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 01:44:23.785518 systemd[1]: Finished ensure-sysext.service. Nov 24 01:44:23.801953 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 24 01:44:23.847918 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 01:44:23.850722 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 01:44:23.852926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 01:44:23.853224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 01:44:23.855275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 01:44:23.855551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 01:44:23.857344 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 01:44:23.858180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 01:44:23.870749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 01:44:23.870965 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 01:44:23.871113 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 01:44:23.874711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 01:44:23.879769 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 01:44:23.892390 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 01:44:23.893462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 01:44:23.907686 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 01:44:23.914858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 01:44:23.945734 augenrules[1521]: No rules Nov 24 01:44:23.947477 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 01:44:23.948710 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 01:44:23.954692 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 01:44:23.974025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 01:44:24.024485 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 01:44:24.227491 systemd-networkd[1484]: lo: Link UP Nov 24 01:44:24.228268 systemd-networkd[1484]: lo: Gained carrier Nov 24 01:44:24.235189 systemd-networkd[1484]: Enumeration completed Nov 24 01:44:24.235436 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 01:44:24.235861 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 24 01:44:24.235868 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 01:44:24.238566 systemd-networkd[1484]: eth0: Link UP Nov 24 01:44:24.239056 systemd-networkd[1484]: eth0: Gained carrier Nov 24 01:44:24.239669 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 01:44:24.246372 systemd-resolved[1486]: Positive Trust Anchors: Nov 24 01:44:24.246397 systemd-resolved[1486]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 01:44:24.246446 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 01:44:24.259810 systemd-networkd[1484]: eth0: DHCPv4 address 10.230.76.74/30, gateway 10.230.76.73 acquired from 10.230.76.73 Nov 24 01:44:24.263716 systemd-timesyncd[1494]: Network configuration changed, trying to establish connection. Nov 24 01:44:24.265314 systemd-resolved[1486]: Using system hostname 'srv-7vvyr.gb1.brightbox.com'. Nov 24 01:44:24.317678 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 24 01:44:24.320181 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 01:44:24.321546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 01:44:24.323079 systemd[1]: Reached target network.target - Network. Nov 24 01:44:24.324202 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 01:44:24.324987 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 01:44:24.325791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 01:44:24.326548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 01:44:24.327286 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 01:44:24.328001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 01:44:24.328794 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 01:44:24.328841 systemd[1]: Reached target paths.target - Path Units. Nov 24 01:44:24.329434 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 01:44:24.330372 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 01:44:24.331191 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 01:44:24.331963 systemd[1]: Reached target timers.target - Timer Units. Nov 24 01:44:24.334185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 01:44:24.336847 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 01:44:24.341222 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
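The DHCPv4 lease above is a /30, which leaves exactly two usable addresses: the gateway and this host. The numbers can be verified with Python's ipaddress module, using the values copied from the lease line:

```python
import ipaddress

# Values from the DHCPv4 lease logged above.
iface = ipaddress.ip_interface("10.230.76.74/30")
gateway = ipaddress.ip_address("10.230.76.73")

net = iface.network
print(net)                    # 10.230.76.72/30
print(list(net.hosts()))      # [10.230.76.73, 10.230.76.74] - gateway and this host
print(net.broadcast_address)  # 10.230.76.75
print(gateway in net)         # True
```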
Nov 24 01:44:24.342325 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 01:44:24.343189 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 01:44:24.346583 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 01:44:24.347757 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 01:44:24.350463 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 01:44:24.352747 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 01:44:24.355434 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 01:44:24.357188 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 01:44:24.357868 systemd[1]: Reached target basic.target - Basic System. Nov 24 01:44:24.359813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 01:44:24.359875 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 01:44:24.362723 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 01:44:24.372800 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 01:44:24.376888 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 01:44:24.380975 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 01:44:24.385354 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 01:44:24.392962 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 01:44:24.394722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 01:44:24.398640 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:24.401637 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 01:44:24.406206 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 01:44:24.409945 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 01:44:24.414283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 01:44:24.420080 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 01:44:24.432991 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 01:44:24.435067 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 01:44:24.435867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 01:44:24.440322 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 01:44:24.446687 jq[1552]: false Nov 24 01:44:24.449299 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 01:44:24.458737 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 01:44:24.468364 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 01:44:24.468720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 24 01:44:24.489112 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 01:44:24.489438 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 24 01:44:24.493988 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache Nov 24 01:44:24.493996 oslogin_cache_refresh[1555]: Refreshing passwd entry cache Nov 24 01:44:24.505037 jq[1564]: true Nov 24 01:44:24.533413 oslogin_cache_refresh[1555]: Failure getting users, quitting Nov 24 01:44:24.533005 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 01:44:24.543575 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting Nov 24 01:44:24.543575 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 01:44:24.543575 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Nov 24 01:44:24.543575 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Nov 24 01:44:24.543575 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 01:44:24.533437 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 01:44:24.534091 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 01:44:24.544456 extend-filesystems[1553]: Found /dev/vda6 Nov 24 01:44:24.533517 oslogin_cache_refresh[1555]: Refreshing group entry cache Nov 24 01:44:24.534838 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 01:44:24.538476 oslogin_cache_refresh[1555]: Failure getting groups, quitting Nov 24 01:44:24.542767 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 01:44:24.538492 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 01:44:24.563158 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 01:44:24.567757 jq[1585]: true Nov 24 01:44:24.568028 update_engine[1562]: I20251124 01:44:24.567468 1562 main.cc:92] Flatcar Update Engine starting Nov 24 01:44:24.563484 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 01:44:24.573648 extend-filesystems[1553]: Found /dev/vda9 Nov 24 01:44:24.598493 tar[1573]: linux-amd64/LICENSE Nov 24 01:44:24.598493 tar[1573]: linux-amd64/helm Nov 24 01:44:24.616336 dbus-daemon[1550]: [system] SELinux support is enabled Nov 24 01:44:24.616837 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 01:44:24.624486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 01:44:24.624546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 01:44:24.626910 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 01:44:24.626953 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 24 01:44:24.637080 dbus-daemon[1550]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1484 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 24 01:44:24.647277 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 01:44:24.655760 extend-filesystems[1553]: Checking size of /dev/vda9 Nov 24 01:44:24.661503 update_engine[1562]: I20251124 01:44:24.657961 1562 update_check_scheduler.cc:74] Next update check in 8m18s Nov 24 01:44:24.660726 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 24 01:44:24.662146 systemd[1]: Started update-engine.service - Update Engine. Nov 24 01:44:24.718532 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 01:44:24.721247 systemd-timesyncd[1494]: Contacted time server 77.68.81.77:123 (0.flatcar.pool.ntp.org). Nov 24 01:44:24.723594 systemd-timesyncd[1494]: Initial clock synchronization to Mon 2025-11-24 01:44:24.628858 UTC. Nov 24 01:44:24.765710 extend-filesystems[1553]: Resized partition /dev/vda9 Nov 24 01:44:24.780363 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 01:44:24.798635 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Nov 24 01:44:24.799018 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 01:44:24.808642 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Nov 24 01:44:24.806542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 01:44:24.812124 systemd[1]: Starting sshkeys.service... Nov 24 01:44:24.921829 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 24 01:44:24.954784 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 24 01:44:25.045460 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:25.105007 systemd-logind[1561]: Watching system buttons on /dev/input/event3 (Power Button) Nov 24 01:44:25.108470 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 01:44:25.109774 systemd-logind[1561]: New seat seat0. Nov 24 01:44:25.112284 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 01:44:25.123262 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 24 01:44:25.138587 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 24 01:44:25.141555 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 01:44:25.141179 dbus-daemon[1550]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1596 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 24 01:44:25.164517 systemd[1]: Starting polkit.service - Authorization Manager... Nov 24 01:44:25.315720 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 24 01:44:25.360909 systemd-networkd[1484]: eth0: Gained IPv6LL Nov 24 01:44:25.365316 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 01:44:25.371143 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 01:44:25.383297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
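The EXT4 message above shows /dev/vda9 being resized online from 1617920 to 15121403 blocks; the completion messages, including the "(4k)" block size reported by resize2fs, follow on the next lines. With 4 KiB blocks that corresponds to roughly 6.2 GiB growing to about 57.7 GiB:

```python
# Block counts from the EXT4-fs / resize2fs messages; ext4 here uses 4 KiB blocks.
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 15_121_403

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # ~57.68 GiB
```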
Nov 24 01:44:25.394889 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 24 01:44:25.394889 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 24 01:44:25.394889 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 24 01:44:25.406558 containerd[1582]: time="2025-11-24T01:44:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 01:44:25.406558 containerd[1582]: time="2025-11-24T01:44:25.384563362Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 01:44:25.388451 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 01:44:25.407046 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Nov 24 01:44:25.391059 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 01:44:25.392682 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.419864760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="35.727µs" Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.419913685Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.419958276Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420297362Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420361793Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420434741Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420593271Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420645486Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420959613Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.420981676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.421006465Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422281 containerd[1582]: time="2025-11-24T01:44:25.421021324Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 01:44:25.422844 containerd[1582]: time="2025-11-24T01:44:25.421214566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 01:44:25.438064 containerd[1582]: time="2025-11-24T01:44:25.421589422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 01:44:25.438064 containerd[1582]: time="2025-11-24T01:44:25.436671483Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 01:44:25.438064 containerd[1582]: time="2025-11-24T01:44:25.436705492Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 01:44:25.438064 containerd[1582]: time="2025-11-24T01:44:25.436770713Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 01:44:25.438064 containerd[1582]: time="2025-11-24T01:44:25.437096038Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 01:44:25.438064 containerd[1582]: time="2025-11-24T01:44:25.437206365Z" level=info msg="metadata content store policy set" policy=shared Nov 24 01:44:25.446802 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453662400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453761536Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453828605Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453855701Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453884760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453911613Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453947373Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453967254Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.453984023Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.454024964Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.454043860Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 01:44:25.457633 containerd[1582]: 
time="2025-11-24T01:44:25.454063532Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.454294629Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 01:44:25.457633 containerd[1582]: time="2025-11-24T01:44:25.454345889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454371158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454390757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454411104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454432949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454450679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454475152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454494135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454512431Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454528361Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454645354Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454688990Z" level=info msg="Start snapshots syncer" Nov 24 01:44:25.458254 containerd[1582]: time="2025-11-24T01:44:25.454735409Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 01:44:25.458703 containerd[1582]: time="2025-11-24T01:44:25.455167064Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 01:44:25.458703 containerd[1582]: time="2025-11-24T01:44:25.455259155Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 01:44:25.459015 containerd[1582]: time="2025-11-24T01:44:25.455353053Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 01:44:25.459015 containerd[1582]: time="2025-11-24T01:44:25.455530848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 01:44:25.459015 containerd[1582]: time="2025-11-24T01:44:25.455561181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 01:44:25.459015 containerd[1582]: time="2025-11-24T01:44:25.455586802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 01:44:25.471708 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 24 01:44:25.474570 containerd[1582]: time="2025-11-24T01:44:25.474000858Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 01:44:25.476440 containerd[1582]: time="2025-11-24T01:44:25.474869310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 01:44:25.476440 containerd[1582]: time="2025-11-24T01:44:25.474979965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 01:44:25.476440 containerd[1582]: time="2025-11-24T01:44:25.475097902Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 01:44:25.476440 containerd[1582]: time="2025-11-24T01:44:25.475283992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 01:44:25.476440 containerd[1582]: time="2025-11-24T01:44:25.475351322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 01:44:25.476440 containerd[1582]: time="2025-11-24T01:44:25.475416670Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 01:44:25.477735 containerd[1582]: time="2025-11-24T01:44:25.475595541Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 01:44:25.477844 containerd[1582]: time="2025-11-24T01:44:25.477816705Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478641443Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478674301Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478690101Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478708209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478735988Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478790451Z" level=info msg="runtime interface created" Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478803013Z" level=info msg="created NRI interface" Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478837531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478866942Z" level=info msg="Connect containerd service" Nov 24 01:44:25.480692 containerd[1582]: time="2025-11-24T01:44:25.478932512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 01:44:25.488632 containerd[1582]: time="2025-11-24T01:44:25.483212143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network 
config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 01:44:25.536135 polkitd[1632]: Started polkitd version 126 Nov 24 01:44:25.565685 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d Nov 24 01:44:25.566398 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d Nov 24 01:44:25.568742 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 01:44:25.569109 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 24 01:44:25.569150 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 01:44:25.569230 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 24 01:44:25.607831 polkitd[1632]: Finished loading, compiling and executing 2 rules Nov 24 01:44:25.608702 systemd[1]: Started polkit.service - Authorization Manager. Nov 24 01:44:25.609874 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 24 01:44:25.611049 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 24 01:44:25.672175 systemd-hostnamed[1596]: Hostname set to (static) Nov 24 01:44:25.816322 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 01:44:25.881520 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 01:44:25.891319 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 01:44:25.895773 systemd[1]: Started sshd@0-10.230.76.74:22-139.178.68.195:40178.service - OpenSSH per-connection server daemon (139.178.68.195:40178). Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.939556940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.939697767Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.939744491Z" level=info msg="Start subscribing containerd event" Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.939796305Z" level=info msg="Start recovering state" Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.940039518Z" level=info msg="Start event monitor" Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.940067717Z" level=info msg="Start cni network conf syncer for default" Nov 24 01:44:25.940103 containerd[1582]: time="2025-11-24T01:44:25.940088795Z" level=info msg="Start streaming server" Nov 24 01:44:25.941200 containerd[1582]: time="2025-11-24T01:44:25.940140297Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 01:44:25.941200 containerd[1582]: time="2025-11-24T01:44:25.940159867Z" level=info msg="runtime interface starting up..." Nov 24 01:44:25.941200 containerd[1582]: time="2025-11-24T01:44:25.940174892Z" level=info msg="starting plugins..." Nov 24 01:44:25.941200 containerd[1582]: time="2025-11-24T01:44:25.940219514Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 01:44:25.941200 containerd[1582]: time="2025-11-24T01:44:25.940406150Z" level=info msg="containerd successfully booted in 0.559300s" Nov 24 01:44:25.940489 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 01:44:25.947074 systemd[1]: issuegen.service: Deactivated successfully. 
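polkitd above lists the directories it reads rules from and reports that /run/polkit-1/rules.d and /usr/local/share/polkit-1/rules.d do not exist on this image, before compiling the 2 rules it did find. A minimal sketch that walks the same directories, taken verbatim from those messages, and shows which .rules files are present:

```python
from pathlib import Path

# Directory list taken from the polkitd messages above.
RULE_DIRS = [
    "/etc/polkit-1/rules.d",
    "/run/polkit-1/rules.d",
    "/usr/local/share/polkit-1/rules.d",
    "/usr/share/polkit-1/rules.d",
]

for d in RULE_DIRS:
    path = Path(d)
    if not path.is_dir():
        print(f"{d}: missing (polkitd logs a g-file-error for this)")
        continue
    rules = sorted(p.name for p in path.glob("*.rules"))
    print(f"{d}: {len(rules)} rule file(s)", rules)
```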
Nov 24 01:44:25.947392 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 01:44:25.957343 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 01:44:26.016241 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 01:44:26.026000 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 01:44:26.031686 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 01:44:26.033138 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 01:44:26.149640 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:26.163804 systemd-networkd[1484]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9312:24:19ff:fee6:4c4a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9312:24:19ff:fee6:4c4a/64 assigned by NDisc. Nov 24 01:44:26.163816 systemd-networkd[1484]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 24 01:44:26.316740 tar[1573]: linux-amd64/README.md Nov 24 01:44:26.338084 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 01:44:26.929808 sshd[1679]: Accepted publickey for core from 139.178.68.195 port 40178 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:26.932775 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:26.947387 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 01:44:26.950460 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 01:44:26.970204 systemd-logind[1561]: New session 1 of user core. Nov 24 01:44:26.995250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 01:44:27.001044 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 01:44:27.019411 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 01:44:27.025131 systemd-logind[1561]: New session c1 of user core. Nov 24 01:44:27.214198 systemd[1696]: Queued start job for default target default.target. Nov 24 01:44:27.220259 systemd[1696]: Created slice app.slice - User Application Slice. Nov 24 01:44:27.220299 systemd[1696]: Reached target paths.target - Paths. Nov 24 01:44:27.220472 systemd[1696]: Reached target timers.target - Timers. Nov 24 01:44:27.224712 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 01:44:27.258495 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 01:44:27.258985 systemd[1696]: Reached target sockets.target - Sockets. Nov 24 01:44:27.259283 systemd[1696]: Reached target basic.target - Basic System. Nov 24 01:44:27.259555 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 01:44:27.260490 systemd[1696]: Reached target default.target - Main User Target. Nov 24 01:44:27.261029 systemd[1696]: Startup finished in 224ms. Nov 24 01:44:27.272000 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 01:44:27.313163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
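The systemd-networkd message above explains why the DHCPv6 address is dropped: the /128 it was handed is already covered by the /64 prefix configured from the NDisc router advertisement. The containment can be checked directly with Python's ipaddress module, using the address and prefix lengths from that message:

```python
import ipaddress

# Address and prefixes copied from the systemd-networkd message above.
dhcpv6_addr = ipaddress.ip_interface("2a02:1348:179:9312:24:19ff:fee6:4c4a/128")
ndisc_prefix = ipaddress.ip_interface("2a02:1348:179:9312:24:19ff:fee6:4c4a/64").network

print(ndisc_prefix)                    # 2a02:1348:179:9312::/64
print(dhcpv6_addr.ip in ndisc_prefix)  # True -> networkd keeps the /64 and ignores the /128
```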
Nov 24 01:44:27.333294 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 01:44:27.524014 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:27.935801 systemd[1]: Started sshd@1-10.230.76.74:22-139.178.68.195:40182.service - OpenSSH per-connection server daemon (139.178.68.195:40182). Nov 24 01:44:28.048666 kubelet[1710]: E1124 01:44:28.047701 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 01:44:28.053017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 01:44:28.053263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 01:44:28.054199 systemd[1]: kubelet.service: Consumed 1.626s CPU time, 268.9M memory peak. Nov 24 01:44:28.184650 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:28.926643 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 40182 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:28.928353 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:28.935917 systemd-logind[1561]: New session 2 of user core. Nov 24 01:44:28.942867 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 01:44:29.603588 sshd[1724]: Connection closed by 139.178.68.195 port 40182 Nov 24 01:44:29.603347 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:29.610076 systemd[1]: sshd@1-10.230.76.74:22-139.178.68.195:40182.service: Deactivated successfully. Nov 24 01:44:29.612322 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 01:44:29.613768 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Nov 24 01:44:29.615662 systemd-logind[1561]: Removed session 2. Nov 24 01:44:29.757642 systemd[1]: Started sshd@2-10.230.76.74:22-139.178.68.195:40198.service - OpenSSH per-connection server daemon (139.178.68.195:40198). Nov 24 01:44:30.681425 sshd[1730]: Accepted publickey for core from 139.178.68.195 port 40198 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:30.683211 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:30.692308 systemd-logind[1561]: New session 3 of user core. Nov 24 01:44:30.714083 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 01:44:31.153254 login[1687]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 01:44:31.159289 login[1688]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 01:44:31.162472 systemd-logind[1561]: New session 5 of user core. Nov 24 01:44:31.171959 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 01:44:31.178019 systemd-logind[1561]: New session 4 of user core. Nov 24 01:44:31.189078 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 24 01:44:31.318862 sshd[1733]: Connection closed by 139.178.68.195 port 40198 Nov 24 01:44:31.319961 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:31.325837 systemd[1]: sshd@2-10.230.76.74:22-139.178.68.195:40198.service: Deactivated successfully. Nov 24 01:44:31.328499 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 01:44:31.330382 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Nov 24 01:44:31.332283 systemd-logind[1561]: Removed session 3. Nov 24 01:44:31.540662 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:31.554362 coreos-metadata[1549]: Nov 24 01:44:31.554 WARN failed to locate config-drive, using the metadata service API instead Nov 24 01:44:31.579988 coreos-metadata[1549]: Nov 24 01:44:31.579 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 24 01:44:31.588941 coreos-metadata[1549]: Nov 24 01:44:31.588 INFO Fetch failed with 404: resource not found Nov 24 01:44:31.588941 coreos-metadata[1549]: Nov 24 01:44:31.588 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 24 01:44:31.589700 coreos-metadata[1549]: Nov 24 01:44:31.589 INFO Fetch successful Nov 24 01:44:31.589866 coreos-metadata[1549]: Nov 24 01:44:31.589 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 24 01:44:31.607955 coreos-metadata[1549]: Nov 24 01:44:31.607 INFO Fetch successful Nov 24 01:44:31.608221 coreos-metadata[1549]: Nov 24 01:44:31.608 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 24 01:44:31.630940 coreos-metadata[1549]: Nov 24 01:44:31.630 INFO Fetch successful Nov 24 01:44:31.631192 coreos-metadata[1549]: Nov 24 01:44:31.631 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 24 01:44:31.645748 coreos-metadata[1549]: Nov 24 01:44:31.645 INFO Fetch successful Nov 24 01:44:31.646004 coreos-metadata[1549]: Nov 24 01:44:31.645 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 24 01:44:31.664186 coreos-metadata[1549]: Nov 24 01:44:31.664 INFO Fetch successful Nov 24 01:44:31.702151 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 01:44:31.703435 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 01:44:32.198679 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 24 01:44:32.208240 coreos-metadata[1622]: Nov 24 01:44:32.208 WARN failed to locate config-drive, using the metadata service API instead Nov 24 01:44:32.232347 coreos-metadata[1622]: Nov 24 01:44:32.232 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 24 01:44:32.260738 coreos-metadata[1622]: Nov 24 01:44:32.260 INFO Fetch successful Nov 24 01:44:32.260940 coreos-metadata[1622]: Nov 24 01:44:32.260 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 24 01:44:32.290027 coreos-metadata[1622]: Nov 24 01:44:32.289 INFO Fetch successful Nov 24 01:44:32.295466 unknown[1622]: wrote ssh authorized keys file for user: core Nov 24 01:44:32.323652 update-ssh-keys[1773]: Updated "/home/core/.ssh/authorized_keys" Nov 24 01:44:32.325989 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 01:44:32.329946 systemd[1]: Finished sshkeys.service. 
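In this stretch coreos-metadata cannot find a config-drive (the repeated "Can't lookup blockdev" kernel messages for /dev/disk/by-label/config-2) and falls back to the metadata service at 169.254.169.254, fetching hostname, instance-id, instance-type, local-ipv4 and public-ipv4 in turn. A rough sketch of that fallback pattern, using only the endpoints that appear in the log; it is illustrative and not the agent's actual implementation:

```python
"""Sketch of the config-drive -> metadata-API fallback seen in the log above."""
import os
import urllib.request

CONFIG_DRIVE = "/dev/disk/by-label/config-2"
METADATA_BASE = "http://169.254.169.254/latest/meta-data"
KEYS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

def fetch(key: str, timeout: float = 5.0) -> str:
    with urllib.request.urlopen(f"{METADATA_BASE}/{key}", timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    if os.path.exists(CONFIG_DRIVE):
        print("config-drive present; a real agent would read it instead")
    else:
        # Mirrors the "failed to locate config-drive, using the metadata service API instead" path.
        for key in KEYS:
            try:
                print(f"{key}: {fetch(key)}")
            except OSError as err:
                print(f"{key}: fetch failed ({err})")
```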
Nov 24 01:44:32.332333 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 01:44:32.335778 systemd[1]: Startup finished in 3.607s (kernel) + 14.704s (initrd) + 12.038s (userspace) = 30.351s. Nov 24 01:44:38.153610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 01:44:38.156243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:44:38.377713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:44:38.390583 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 01:44:38.459519 kubelet[1784]: E1124 01:44:38.459324 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 01:44:38.465271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 01:44:38.465536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 01:44:38.466378 systemd[1]: kubelet.service: Consumed 258ms CPU time, 110M memory peak. Nov 24 01:44:41.449702 systemd[1]: Started sshd@3-10.230.76.74:22-139.178.68.195:42036.service - OpenSSH per-connection server daemon (139.178.68.195:42036). Nov 24 01:44:42.371081 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 42036 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:42.372831 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:42.379830 systemd-logind[1561]: New session 6 of user core. Nov 24 01:44:42.387950 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 01:44:42.991443 sshd[1794]: Connection closed by 139.178.68.195 port 42036 Nov 24 01:44:42.992298 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:42.997821 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Nov 24 01:44:42.998402 systemd[1]: sshd@3-10.230.76.74:22-139.178.68.195:42036.service: Deactivated successfully. Nov 24 01:44:43.001085 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 01:44:43.003434 systemd-logind[1561]: Removed session 6. Nov 24 01:44:43.154035 systemd[1]: Started sshd@4-10.230.76.74:22-139.178.68.195:42050.service - OpenSSH per-connection server daemon (139.178.68.195:42050). Nov 24 01:44:44.072779 sshd[1800]: Accepted publickey for core from 139.178.68.195 port 42050 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:44.074498 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:44.082647 systemd-logind[1561]: New session 7 of user core. Nov 24 01:44:44.090003 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 01:44:44.697658 sshd[1803]: Connection closed by 139.178.68.195 port 42050 Nov 24 01:44:44.696645 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:44.702018 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Nov 24 01:44:44.703183 systemd[1]: sshd@4-10.230.76.74:22-139.178.68.195:42050.service: Deactivated successfully. Nov 24 01:44:44.705745 systemd[1]: session-7.scope: Deactivated successfully. 
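The "Startup finished" line earlier in this stretch breaks boot time into kernel, initrd and userspace phases. The displayed parts do not quite sum to the displayed total (30.349 s vs. 30.351 s), most likely because each figure is formatted from the underlying timestamps independently; a quick check:

```python
# Phase durations in seconds, from the "Startup finished" line above.
kernel, initrd, userspace = 3.607, 14.704, 12.038
print(round(kernel + initrd + userspace, 3))  # 30.349, vs. the reported total of 30.351s
```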
Nov 24 01:44:44.708176 systemd-logind[1561]: Removed session 7. Nov 24 01:44:44.857960 systemd[1]: Started sshd@5-10.230.76.74:22-139.178.68.195:42066.service - OpenSSH per-connection server daemon (139.178.68.195:42066). Nov 24 01:44:45.783785 sshd[1809]: Accepted publickey for core from 139.178.68.195 port 42066 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:45.785743 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:45.794767 systemd-logind[1561]: New session 8 of user core. Nov 24 01:44:45.801994 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 01:44:46.410961 sshd[1812]: Connection closed by 139.178.68.195 port 42066 Nov 24 01:44:46.411869 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:46.418490 systemd[1]: sshd@5-10.230.76.74:22-139.178.68.195:42066.service: Deactivated successfully. Nov 24 01:44:46.421850 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 01:44:46.423327 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Nov 24 01:44:46.425964 systemd-logind[1561]: Removed session 8. Nov 24 01:44:46.593947 systemd[1]: Started sshd@6-10.230.76.74:22-139.178.68.195:42078.service - OpenSSH per-connection server daemon (139.178.68.195:42078). Nov 24 01:44:47.587995 sshd[1818]: Accepted publickey for core from 139.178.68.195 port 42078 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:47.589711 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:47.598427 systemd-logind[1561]: New session 9 of user core. Nov 24 01:44:47.607269 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 01:44:48.127131 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 01:44:48.127645 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 01:44:48.146445 sudo[1822]: pam_unix(sudo:session): session closed for user root Nov 24 01:44:48.304775 sshd[1821]: Connection closed by 139.178.68.195 port 42078 Nov 24 01:44:48.305879 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:48.312188 systemd[1]: sshd@6-10.230.76.74:22-139.178.68.195:42078.service: Deactivated successfully. Nov 24 01:44:48.315092 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 01:44:48.316496 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Nov 24 01:44:48.318895 systemd-logind[1561]: Removed session 9. Nov 24 01:44:48.454356 systemd[1]: Started sshd@7-10.230.76.74:22-139.178.68.195:42090.service - OpenSSH per-connection server daemon (139.178.68.195:42090). Nov 24 01:44:48.471265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 01:44:48.475718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:44:48.824699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 01:44:48.840289 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 01:44:48.919860 kubelet[1839]: E1124 01:44:48.919749 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 01:44:48.923241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 01:44:48.923676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 01:44:48.924536 systemd[1]: kubelet.service: Consumed 393ms CPU time, 110.6M memory peak. Nov 24 01:44:49.382479 sshd[1828]: Accepted publickey for core from 139.178.68.195 port 42090 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:49.385030 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:49.391582 systemd-logind[1561]: New session 10 of user core. Nov 24 01:44:49.403030 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 01:44:49.869797 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 01:44:49.870984 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 01:44:49.879225 sudo[1847]: pam_unix(sudo:session): session closed for user root Nov 24 01:44:49.888062 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 01:44:49.888708 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 01:44:49.903269 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 01:44:49.959428 augenrules[1869]: No rules Nov 24 01:44:49.960416 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 01:44:49.960826 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 01:44:49.962825 sudo[1846]: pam_unix(sudo:session): session closed for user root Nov 24 01:44:50.109689 sshd[1845]: Connection closed by 139.178.68.195 port 42090 Nov 24 01:44:50.110179 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Nov 24 01:44:50.115320 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Nov 24 01:44:50.115723 systemd[1]: sshd@7-10.230.76.74:22-139.178.68.195:42090.service: Deactivated successfully. Nov 24 01:44:50.118056 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 01:44:50.121410 systemd-logind[1561]: Removed session 10. Nov 24 01:44:50.271341 systemd[1]: Started sshd@8-10.230.76.74:22-139.178.68.195:42106.service - OpenSSH per-connection server daemon (139.178.68.195:42106). Nov 24 01:44:51.194436 sshd[1878]: Accepted publickey for core from 139.178.68.195 port 42106 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:44:51.196127 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:44:51.204370 systemd-logind[1561]: New session 11 of user core. Nov 24 01:44:51.211080 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 24 01:44:51.677975 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 01:44:51.678403 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 01:44:52.351087 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 01:44:52.373401 (dockerd)[1899]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 01:44:52.899200 dockerd[1899]: time="2025-11-24T01:44:52.898676512Z" level=info msg="Starting up" Nov 24 01:44:52.902631 dockerd[1899]: time="2025-11-24T01:44:52.901824682Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 01:44:52.931672 dockerd[1899]: time="2025-11-24T01:44:52.931547240Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 01:44:53.009089 dockerd[1899]: time="2025-11-24T01:44:53.009016782Z" level=info msg="Loading containers: start." Nov 24 01:44:53.041247 kernel: Initializing XFRM netlink socket Nov 24 01:44:53.409030 systemd-networkd[1484]: docker0: Link UP Nov 24 01:44:53.414538 dockerd[1899]: time="2025-11-24T01:44:53.414403968Z" level=info msg="Loading containers: done." Nov 24 01:44:53.439768 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3780471343-merged.mount: Deactivated successfully. Nov 24 01:44:53.462484 dockerd[1899]: time="2025-11-24T01:44:53.462392859Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 01:44:53.462679 dockerd[1899]: time="2025-11-24T01:44:53.462581331Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 01:44:53.462839 dockerd[1899]: time="2025-11-24T01:44:53.462794337Z" level=info msg="Initializing buildkit" Nov 24 01:44:53.492979 dockerd[1899]: time="2025-11-24T01:44:53.492910149Z" level=info msg="Completed buildkit initialization" Nov 24 01:44:53.506156 dockerd[1899]: time="2025-11-24T01:44:53.506058239Z" level=info msg="Daemon has completed initialization" Nov 24 01:44:53.506576 dockerd[1899]: time="2025-11-24T01:44:53.506424234Z" level=info msg="API listen on /run/docker.sock" Nov 24 01:44:53.506535 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 01:44:54.736231 containerd[1582]: time="2025-11-24T01:44:54.735166458Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 01:44:55.809906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881354360.mount: Deactivated successfully. Nov 24 01:44:56.198555 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 24 01:44:58.209329 containerd[1582]: time="2025-11-24T01:44:58.209257169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:44:58.210816 containerd[1582]: time="2025-11-24T01:44:58.210772353Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113221" Nov 24 01:44:58.212750 containerd[1582]: time="2025-11-24T01:44:58.211607844Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:44:58.215821 containerd[1582]: time="2025-11-24T01:44:58.215759108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:44:58.217094 containerd[1582]: time="2025-11-24T01:44:58.217060024Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 3.481770327s" Nov 24 01:44:58.217278 containerd[1582]: time="2025-11-24T01:44:58.217248869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 01:44:58.218290 containerd[1582]: time="2025-11-24T01:44:58.218258017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 01:44:59.153903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 24 01:44:59.157765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:44:59.527835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:44:59.538123 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 01:44:59.632632 kubelet[2188]: E1124 01:44:59.632034 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 01:44:59.634390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 01:44:59.634637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 01:44:59.635166 systemd[1]: kubelet.service: Consumed 278ms CPU time, 110.1M memory peak. 
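The kubelet.service failures recorded above repeat on a roughly ten-second cadence because systemd keeps rescheduling the unit while /var/lib/kubelet/config.yaml does not yet exist; that file is normally only written once kubeadm initializes or joins the node. The "Referenced but unset environment variable" messages for KUBELET_KUBEADM_ARGS and KUBELET_EXTRA_ARGS, together with the --config path the kubelet fails on, are consistent with a kubeadm-style drop-in. The sketch below is illustrative only (typical kubeadm drop-in contents and restart policy); the exact unit shipped on this image may differ, and the file paths shown are assumptions, not taken from this host:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (illustrative sketch, not this host's file)
    [Service]
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # would supply KUBELET_KUBEADM_ARGS once kubeadm has run
    EnvironmentFile=-/etc/default/kubelet                 # would supply optional KUBELET_EXTRA_ARGS
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

    # The restart cadence seen in the "Scheduled restart job" lines is what a
    # policy like this produces in the main kubelet.service unit:
    [Service]
    Restart=always
    RestartSec=10

Once something writes /var/lib/kubelet/config.yaml (as happens later in this log, when kubelet[2505] starts with a full configuration), the same unit starts cleanly without further intervention.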
Nov 24 01:45:00.833236 containerd[1582]: time="2025-11-24T01:45:00.832310196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:00.834990 containerd[1582]: time="2025-11-24T01:45:00.834937170Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018115" Nov 24 01:45:00.836745 containerd[1582]: time="2025-11-24T01:45:00.836660824Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:00.847675 containerd[1582]: time="2025-11-24T01:45:00.847565260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:00.849173 containerd[1582]: time="2025-11-24T01:45:00.849002786Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 2.630700501s" Nov 24 01:45:00.849173 containerd[1582]: time="2025-11-24T01:45:00.849049239Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 01:45:00.849737 containerd[1582]: time="2025-11-24T01:45:00.849696533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 01:45:03.187329 containerd[1582]: time="2025-11-24T01:45:03.187263572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:03.188958 containerd[1582]: time="2025-11-24T01:45:03.188734951Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156490" Nov 24 01:45:03.190168 containerd[1582]: time="2025-11-24T01:45:03.190133193Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:03.194235 containerd[1582]: time="2025-11-24T01:45:03.193595727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:03.194955 containerd[1582]: time="2025-11-24T01:45:03.194908151Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 2.345073768s" Nov 24 01:45:03.194955 containerd[1582]: time="2025-11-24T01:45:03.194951548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 01:45:03.196505 
containerd[1582]: time="2025-11-24T01:45:03.196475387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 01:45:05.161946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050807233.mount: Deactivated successfully. Nov 24 01:45:06.174410 containerd[1582]: time="2025-11-24T01:45:06.173262186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:06.174410 containerd[1582]: time="2025-11-24T01:45:06.174354632Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929146" Nov 24 01:45:06.175799 containerd[1582]: time="2025-11-24T01:45:06.175760496Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:06.178688 containerd[1582]: time="2025-11-24T01:45:06.177924571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:06.180592 containerd[1582]: time="2025-11-24T01:45:06.180542599Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 2.983925864s" Nov 24 01:45:06.180745 containerd[1582]: time="2025-11-24T01:45:06.180718145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 01:45:06.183891 containerd[1582]: time="2025-11-24T01:45:06.183844040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 01:45:06.921422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027332893.mount: Deactivated successfully. Nov 24 01:45:09.665693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 24 01:45:09.670895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:45:09.923075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:09.937721 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 01:45:09.998208 update_engine[1562]: I20251124 01:45:09.997880 1562 update_attempter.cc:509] Updating boot flags... 
Nov 24 01:45:10.014566 containerd[1582]: time="2025-11-24T01:45:10.014217585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:10.019342 containerd[1582]: time="2025-11-24T01:45:10.018991090Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Nov 24 01:45:10.024821 containerd[1582]: time="2025-11-24T01:45:10.024749121Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:10.038232 containerd[1582]: time="2025-11-24T01:45:10.035687420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:10.038232 containerd[1582]: time="2025-11-24T01:45:10.037246121Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.853159794s" Nov 24 01:45:10.038232 containerd[1582]: time="2025-11-24T01:45:10.037291883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 01:45:10.039598 containerd[1582]: time="2025-11-24T01:45:10.039564195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 01:45:10.060916 kubelet[2268]: E1124 01:45:10.060858 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 01:45:10.065273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 01:45:10.065495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 01:45:10.066969 systemd[1]: kubelet.service: Consumed 263ms CPU time, 107.9M memory peak. Nov 24 01:45:11.065800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394302831.mount: Deactivated successfully. 
Nov 24 01:45:11.071996 containerd[1582]: time="2025-11-24T01:45:11.071923571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 01:45:11.073725 containerd[1582]: time="2025-11-24T01:45:11.073377857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 24 01:45:11.074533 containerd[1582]: time="2025-11-24T01:45:11.074492322Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 01:45:11.078133 containerd[1582]: time="2025-11-24T01:45:11.078074859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 01:45:11.079357 containerd[1582]: time="2025-11-24T01:45:11.079319962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.039714246s" Nov 24 01:45:11.079595 containerd[1582]: time="2025-11-24T01:45:11.079463702Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 01:45:11.080196 containerd[1582]: time="2025-11-24T01:45:11.080151614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 01:45:11.784567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130156620.mount: Deactivated successfully. 
Nov 24 01:45:16.927661 containerd[1582]: time="2025-11-24T01:45:16.927471019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:16.929057 containerd[1582]: time="2025-11-24T01:45:16.929011609Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235" Nov 24 01:45:16.946248 containerd[1582]: time="2025-11-24T01:45:16.946148445Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:16.951698 containerd[1582]: time="2025-11-24T01:45:16.951603880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:16.953662 containerd[1582]: time="2025-11-24T01:45:16.953290289Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.873093983s" Nov 24 01:45:16.953662 containerd[1582]: time="2025-11-24T01:45:16.953346985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 01:45:20.153593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 24 01:45:20.156864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:45:20.401890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:20.412291 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 01:45:20.515148 kubelet[2378]: E1124 01:45:20.515069 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 01:45:20.518277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 01:45:20.518521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 01:45:20.520167 systemd[1]: kubelet.service: Consumed 214ms CPU time, 107M memory peak. Nov 24 01:45:22.756262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:22.757411 systemd[1]: kubelet.service: Consumed 214ms CPU time, 107M memory peak. Nov 24 01:45:22.761342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:45:22.806726 systemd[1]: Reload requested from client PID 2393 ('systemctl') (unit session-11.scope)... Nov 24 01:45:22.806796 systemd[1]: Reloading... Nov 24 01:45:22.985652 zram_generator::config[2438]: No configuration found. Nov 24 01:45:23.322644 systemd[1]: Reloading finished in 515 ms. Nov 24 01:45:23.437381 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 01:45:23.437527 systemd[1]: kubelet.service: Failed with result 'signal'. 
Nov 24 01:45:23.438201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:23.438335 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.6M memory peak. Nov 24 01:45:23.446567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:45:23.628498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:23.648637 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 01:45:23.736237 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 01:45:23.736237 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 01:45:23.736237 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 01:45:23.738456 kubelet[2505]: I1124 01:45:23.738359 2505 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 01:45:23.966568 kubelet[2505]: I1124 01:45:23.966044 2505 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 01:45:23.966568 kubelet[2505]: I1124 01:45:23.966103 2505 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 01:45:23.966953 kubelet[2505]: I1124 01:45:23.966931 2505 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 01:45:24.014315 kubelet[2505]: I1124 01:45:24.014150 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 01:45:24.015414 kubelet[2505]: E1124 01:45:24.015376 2505 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.76.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 01:45:24.032537 kubelet[2505]: I1124 01:45:24.032496 2505 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 01:45:24.044297 kubelet[2505]: I1124 01:45:24.044252 2505 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 01:45:24.053228 kubelet[2505]: I1124 01:45:24.053112 2505 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 01:45:24.056563 kubelet[2505]: I1124 01:45:24.053432 2505 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-7vvyr.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 01:45:24.057101 kubelet[2505]: I1124 01:45:24.057075 2505 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 01:45:24.057214 kubelet[2505]: I1124 01:45:24.057195 2505 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 01:45:24.057661 kubelet[2505]: I1124 01:45:24.057640 2505 state_mem.go:36] "Initialized new in-memory state store" Nov 24 01:45:24.076691 kubelet[2505]: I1124 01:45:24.076469 2505 kubelet.go:480] "Attempting to sync node with API server" Nov 24 01:45:24.076691 kubelet[2505]: I1124 01:45:24.076558 2505 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 01:45:24.079237 kubelet[2505]: I1124 01:45:24.079084 2505 kubelet.go:386] "Adding apiserver pod source" Nov 24 01:45:24.081015 kubelet[2505]: I1124 01:45:24.080845 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 01:45:24.094559 kubelet[2505]: E1124 01:45:24.094389 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.76.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-7vvyr.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 01:45:24.100359 kubelet[2505]: I1124 01:45:24.099069 2505 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 01:45:24.100359 kubelet[2505]: I1124 01:45:24.100199 2505 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Nov 24 01:45:24.101740 kubelet[2505]: W1124 01:45:24.101534 2505 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 01:45:24.104971 kubelet[2505]: E1124 01:45:24.098371 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.76.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 01:45:24.113648 kubelet[2505]: I1124 01:45:24.113559 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 01:45:24.113801 kubelet[2505]: I1124 01:45:24.113708 2505 server.go:1289] "Started kubelet" Nov 24 01:45:24.117663 kubelet[2505]: I1124 01:45:24.117597 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 01:45:24.119448 kubelet[2505]: I1124 01:45:24.119339 2505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 01:45:24.125648 kubelet[2505]: I1124 01:45:24.125533 2505 server.go:317] "Adding debug handlers to kubelet server" Nov 24 01:45:24.128884 kubelet[2505]: I1124 01:45:24.128779 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 01:45:24.129206 kubelet[2505]: E1124 01:45:24.129156 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" Nov 24 01:45:24.130009 kubelet[2505]: I1124 01:45:24.129979 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 01:45:24.130206 kubelet[2505]: I1124 01:45:24.130116 2505 reconciler.go:26] "Reconciler: start to sync state" Nov 24 01:45:24.135846 kubelet[2505]: E1124 01:45:24.130497 2505 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.76.74:6443/api/v1/namespaces/default/events\": dial tcp 10.230.76.74:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-7vvyr.gb1.brightbox.com.187ace04cecbe1e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-7vvyr.gb1.brightbox.com,UID:srv-7vvyr.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-7vvyr.gb1.brightbox.com,},FirstTimestamp:2025-11-24 01:45:24.113629673 +0000 UTC m=+0.438192771,LastTimestamp:2025-11-24 01:45:24.113629673 +0000 UTC m=+0.438192771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-7vvyr.gb1.brightbox.com,}" Nov 24 01:45:24.137368 kubelet[2505]: I1124 01:45:24.137276 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 01:45:24.137938 kubelet[2505]: I1124 01:45:24.137738 2505 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 01:45:24.138130 kubelet[2505]: I1124 01:45:24.138079 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 01:45:24.140637 kubelet[2505]: I1124 01:45:24.139080 2505 factory.go:223] Registration of the systemd container factory successfully Nov 24 01:45:24.140637 kubelet[2505]: I1124 
01:45:24.139222 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 01:45:24.143964 kubelet[2505]: I1124 01:45:24.143889 2505 factory.go:223] Registration of the containerd container factory successfully Nov 24 01:45:24.150550 kubelet[2505]: E1124 01:45:24.150504 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.76.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 01:45:24.151318 kubelet[2505]: E1124 01:45:24.151222 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7vvyr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.74:6443: connect: connection refused" interval="200ms" Nov 24 01:45:24.171404 kubelet[2505]: I1124 01:45:24.171345 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 01:45:24.173297 kubelet[2505]: I1124 01:45:24.173269 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 01:45:24.173512 kubelet[2505]: I1124 01:45:24.173465 2505 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 01:45:24.173675 kubelet[2505]: I1124 01:45:24.173645 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 01:45:24.173804 kubelet[2505]: I1124 01:45:24.173786 2505 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 01:45:24.173995 kubelet[2505]: E1124 01:45:24.173958 2505 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 01:45:24.181565 kubelet[2505]: E1124 01:45:24.181519 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.76.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 01:45:24.188160 kubelet[2505]: E1124 01:45:24.188099 2505 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 01:45:24.198727 kubelet[2505]: I1124 01:45:24.198694 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 01:45:24.198951 kubelet[2505]: I1124 01:45:24.198931 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 01:45:24.199081 kubelet[2505]: I1124 01:45:24.199063 2505 state_mem.go:36] "Initialized new in-memory state store" Nov 24 01:45:24.202507 kubelet[2505]: I1124 01:45:24.202475 2505 policy_none.go:49] "None policy: Start" Nov 24 01:45:24.202703 kubelet[2505]: I1124 01:45:24.202681 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 01:45:24.202847 kubelet[2505]: I1124 01:45:24.202829 2505 state_mem.go:35] "Initializing new in-memory state store" Nov 24 01:45:24.215353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 24 01:45:24.230350 kubelet[2505]: E1124 01:45:24.229487 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" Nov 24 01:45:24.231909 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 01:45:24.239740 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 01:45:24.250522 kubelet[2505]: E1124 01:45:24.250408 2505 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 01:45:24.250947 kubelet[2505]: I1124 01:45:24.250817 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 01:45:24.250947 kubelet[2505]: I1124 01:45:24.250862 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 01:45:24.251588 kubelet[2505]: I1124 01:45:24.251559 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 01:45:24.255048 kubelet[2505]: E1124 01:45:24.255016 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 01:45:24.255782 kubelet[2505]: E1124 01:45:24.255715 2505 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-7vvyr.gb1.brightbox.com\" not found" Nov 24 01:45:24.292992 systemd[1]: Created slice kubepods-burstable-pod13dd715d19d24c24277cc10851ee1f6c.slice - libcontainer container kubepods-burstable-pod13dd715d19d24c24277cc10851ee1f6c.slice. Nov 24 01:45:24.310400 kubelet[2505]: E1124 01:45:24.309966 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.317601 systemd[1]: Created slice kubepods-burstable-pod08fde5bf4bd7a511dd6dabf099878835.slice - libcontainer container kubepods-burstable-pod08fde5bf4bd7a511dd6dabf099878835.slice. 
Nov 24 01:45:24.322259 kubelet[2505]: E1124 01:45:24.322222 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.331587 kubelet[2505]: I1124 01:45:24.331528 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13dd715d19d24c24277cc10851ee1f6c-k8s-certs\") pod \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" (UID: \"13dd715d19d24c24277cc10851ee1f6c\") " pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.331876 kubelet[2505]: I1124 01:45:24.331595 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13dd715d19d24c24277cc10851ee1f6c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" (UID: \"13dd715d19d24c24277cc10851ee1f6c\") " pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.331876 kubelet[2505]: I1124 01:45:24.331665 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-k8s-certs\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.331876 kubelet[2505]: I1124 01:45:24.331694 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0877f5724b0015fe09404c8443a4869-kubeconfig\") pod \"kube-scheduler-srv-7vvyr.gb1.brightbox.com\" (UID: \"e0877f5724b0015fe09404c8443a4869\") " pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.331876 kubelet[2505]: I1124 01:45:24.331720 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13dd715d19d24c24277cc10851ee1f6c-ca-certs\") pod \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" (UID: \"13dd715d19d24c24277cc10851ee1f6c\") " pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.331876 kubelet[2505]: I1124 01:45:24.331744 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-ca-certs\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.332177 kubelet[2505]: I1124 01:45:24.331782 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-flexvolume-dir\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.332177 kubelet[2505]: I1124 01:45:24.331808 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-kubeconfig\") pod 
\"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.332177 kubelet[2505]: I1124 01:45:24.331883 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.338822 systemd[1]: Created slice kubepods-burstable-pode0877f5724b0015fe09404c8443a4869.slice - libcontainer container kubepods-burstable-pode0877f5724b0015fe09404c8443a4869.slice. Nov 24 01:45:24.342240 kubelet[2505]: E1124 01:45:24.342208 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.352809 kubelet[2505]: E1124 01:45:24.352729 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7vvyr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.74:6443: connect: connection refused" interval="400ms" Nov 24 01:45:24.353680 kubelet[2505]: I1124 01:45:24.353657 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.354309 kubelet[2505]: E1124 01:45:24.354252 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.76.74:6443/api/v1/nodes\": dial tcp 10.230.76.74:6443: connect: connection refused" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.557301 kubelet[2505]: I1124 01:45:24.556825 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.557774 kubelet[2505]: E1124 01:45:24.557704 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.76.74:6443/api/v1/nodes\": dial tcp 10.230.76.74:6443: connect: connection refused" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.611978 containerd[1582]: time="2025-11-24T01:45:24.611884391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-7vvyr.gb1.brightbox.com,Uid:13dd715d19d24c24277cc10851ee1f6c,Namespace:kube-system,Attempt:0,}" Nov 24 01:45:24.624172 containerd[1582]: time="2025-11-24T01:45:24.623872112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-7vvyr.gb1.brightbox.com,Uid:08fde5bf4bd7a511dd6dabf099878835,Namespace:kube-system,Attempt:0,}" Nov 24 01:45:24.644355 containerd[1582]: time="2025-11-24T01:45:24.644278700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-7vvyr.gb1.brightbox.com,Uid:e0877f5724b0015fe09404c8443a4869,Namespace:kube-system,Attempt:0,}" Nov 24 01:45:24.754036 kubelet[2505]: E1124 01:45:24.753576 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7vvyr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.74:6443: connect: connection refused" interval="800ms" Nov 24 01:45:24.822007 containerd[1582]: time="2025-11-24T01:45:24.821786328Z" level=info msg="connecting to shim 
d241370b900c4758b8331314f13b820d6d1a71b3c45f19c4905e5fb45a5585b5" address="unix:///run/containerd/s/d80e300d494aead988d9f4255a6d2ace98d0a0560e748ab0fa2a794e87ef22a2" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:24.824324 containerd[1582]: time="2025-11-24T01:45:24.824269251Z" level=info msg="connecting to shim e9710ca80054f20aef31ea472de4d84485fbcc1b63ad9f97445e497598ebad03" address="unix:///run/containerd/s/9147453be0d23aa1133940829ecdd9a540342bcdf19d6f326d195565b38f62e2" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:24.825451 containerd[1582]: time="2025-11-24T01:45:24.825375198Z" level=info msg="connecting to shim 31fbc79daeb3d15287ffc9d85e85b4f2e5c4fc7d3edbff4751bcaca58f5645bc" address="unix:///run/containerd/s/70c6f8dba3203994f1ff8ecace85c3cee741b11a6b7b8d41866f387c52d19037" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:24.964024 kubelet[2505]: I1124 01:45:24.962820 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:24.966121 kubelet[2505]: E1124 01:45:24.965973 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.76.74:6443/api/v1/nodes\": dial tcp 10.230.76.74:6443: connect: connection refused" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:25.004983 systemd[1]: Started cri-containerd-31fbc79daeb3d15287ffc9d85e85b4f2e5c4fc7d3edbff4751bcaca58f5645bc.scope - libcontainer container 31fbc79daeb3d15287ffc9d85e85b4f2e5c4fc7d3edbff4751bcaca58f5645bc. Nov 24 01:45:25.008787 systemd[1]: Started cri-containerd-d241370b900c4758b8331314f13b820d6d1a71b3c45f19c4905e5fb45a5585b5.scope - libcontainer container d241370b900c4758b8331314f13b820d6d1a71b3c45f19c4905e5fb45a5585b5. Nov 24 01:45:25.011862 systemd[1]: Started cri-containerd-e9710ca80054f20aef31ea472de4d84485fbcc1b63ad9f97445e497598ebad03.scope - libcontainer container e9710ca80054f20aef31ea472de4d84485fbcc1b63ad9f97445e497598ebad03. 
Nov 24 01:45:25.148228 containerd[1582]: time="2025-11-24T01:45:25.148057209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-7vvyr.gb1.brightbox.com,Uid:e0877f5724b0015fe09404c8443a4869,Namespace:kube-system,Attempt:0,} returns sandbox id \"d241370b900c4758b8331314f13b820d6d1a71b3c45f19c4905e5fb45a5585b5\"" Nov 24 01:45:25.150278 kubelet[2505]: E1124 01:45:25.150110 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.76.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 01:45:25.164702 containerd[1582]: time="2025-11-24T01:45:25.163865808Z" level=info msg="CreateContainer within sandbox \"d241370b900c4758b8331314f13b820d6d1a71b3c45f19c4905e5fb45a5585b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 01:45:25.170690 containerd[1582]: time="2025-11-24T01:45:25.170127756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-7vvyr.gb1.brightbox.com,Uid:13dd715d19d24c24277cc10851ee1f6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"31fbc79daeb3d15287ffc9d85e85b4f2e5c4fc7d3edbff4751bcaca58f5645bc\"" Nov 24 01:45:25.179295 containerd[1582]: time="2025-11-24T01:45:25.179239813Z" level=info msg="CreateContainer within sandbox \"31fbc79daeb3d15287ffc9d85e85b4f2e5c4fc7d3edbff4751bcaca58f5645bc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 01:45:25.187050 containerd[1582]: time="2025-11-24T01:45:25.186979129Z" level=info msg="Container 9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:45:25.208887 containerd[1582]: time="2025-11-24T01:45:25.208822412Z" level=info msg="CreateContainer within sandbox \"d241370b900c4758b8331314f13b820d6d1a71b3c45f19c4905e5fb45a5585b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd\"" Nov 24 01:45:25.209993 containerd[1582]: time="2025-11-24T01:45:25.209960945Z" level=info msg="StartContainer for \"9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd\"" Nov 24 01:45:25.211836 containerd[1582]: time="2025-11-24T01:45:25.211802413Z" level=info msg="connecting to shim 9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd" address="unix:///run/containerd/s/d80e300d494aead988d9f4255a6d2ace98d0a0560e748ab0fa2a794e87ef22a2" protocol=ttrpc version=3 Nov 24 01:45:25.223964 containerd[1582]: time="2025-11-24T01:45:25.223905574Z" level=info msg="Container adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:45:25.229943 containerd[1582]: time="2025-11-24T01:45:25.229868219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-7vvyr.gb1.brightbox.com,Uid:08fde5bf4bd7a511dd6dabf099878835,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9710ca80054f20aef31ea472de4d84485fbcc1b63ad9f97445e497598ebad03\"" Nov 24 01:45:25.239403 kubelet[2505]: E1124 01:45:25.239241 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.76.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-7vvyr.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 01:45:25.242495 containerd[1582]: time="2025-11-24T01:45:25.242360005Z" level=info msg="CreateContainer within sandbox \"31fbc79daeb3d15287ffc9d85e85b4f2e5c4fc7d3edbff4751bcaca58f5645bc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575\"" Nov 24 01:45:25.242916 systemd[1]: Started cri-containerd-9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd.scope - libcontainer container 9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd. Nov 24 01:45:25.244714 containerd[1582]: time="2025-11-24T01:45:25.244640764Z" level=info msg="StartContainer for \"adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575\"" Nov 24 01:45:25.247364 containerd[1582]: time="2025-11-24T01:45:25.247324053Z" level=info msg="connecting to shim adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575" address="unix:///run/containerd/s/70c6f8dba3203994f1ff8ecace85c3cee741b11a6b7b8d41866f387c52d19037" protocol=ttrpc version=3 Nov 24 01:45:25.247985 containerd[1582]: time="2025-11-24T01:45:25.247939587Z" level=info msg="CreateContainer within sandbox \"e9710ca80054f20aef31ea472de4d84485fbcc1b63ad9f97445e497598ebad03\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 01:45:25.262431 containerd[1582]: time="2025-11-24T01:45:25.262372314Z" level=info msg="Container beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:45:25.277945 containerd[1582]: time="2025-11-24T01:45:25.277797972Z" level=info msg="CreateContainer within sandbox \"e9710ca80054f20aef31ea472de4d84485fbcc1b63ad9f97445e497598ebad03\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153\"" Nov 24 01:45:25.279644 containerd[1582]: time="2025-11-24T01:45:25.278713497Z" level=info msg="StartContainer for \"beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153\"" Nov 24 01:45:25.281359 containerd[1582]: time="2025-11-24T01:45:25.281319431Z" level=info msg="connecting to shim beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153" address="unix:///run/containerd/s/9147453be0d23aa1133940829ecdd9a540342bcdf19d6f326d195565b38f62e2" protocol=ttrpc version=3 Nov 24 01:45:25.291999 systemd[1]: Started cri-containerd-adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575.scope - libcontainer container adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575. Nov 24 01:45:25.323063 systemd[1]: Started cri-containerd-beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153.scope - libcontainer container beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153. 
Nov 24 01:45:25.421020 containerd[1582]: time="2025-11-24T01:45:25.420946888Z" level=info msg="StartContainer for \"9aa22933b7e003d27b11b606d6773aa7e02cac91dbf261a06bb4707fb0643cbd\" returns successfully" Nov 24 01:45:25.445036 containerd[1582]: time="2025-11-24T01:45:25.444983575Z" level=info msg="StartContainer for \"adaea81c931ba7f05d7592fdb13e614fa27d089472341255dbc6c469d37ba575\" returns successfully" Nov 24 01:45:25.449661 kubelet[2505]: E1124 01:45:25.449581 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.76.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 01:45:25.471903 containerd[1582]: time="2025-11-24T01:45:25.471851706Z" level=info msg="StartContainer for \"beccee40c0bd49892680420783efc92475f58e49e3ecf0625db77a98747ce153\" returns successfully" Nov 24 01:45:25.553712 kubelet[2505]: E1124 01:45:25.553660 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.76.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.76.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 01:45:25.555294 kubelet[2505]: E1124 01:45:25.555249 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-7vvyr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.74:6443: connect: connection refused" interval="1.6s" Nov 24 01:45:25.770313 kubelet[2505]: I1124 01:45:25.769775 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:25.770313 kubelet[2505]: E1124 01:45:25.770235 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.76.74:6443/api/v1/nodes\": dial tcp 10.230.76.74:6443: connect: connection refused" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:26.229377 kubelet[2505]: E1124 01:45:26.229334 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:26.236279 kubelet[2505]: E1124 01:45:26.236239 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:26.239321 kubelet[2505]: E1124 01:45:26.239285 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:27.242147 kubelet[2505]: E1124 01:45:27.242097 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:27.243640 kubelet[2505]: E1124 01:45:27.243128 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:27.243640 kubelet[2505]: E1124 01:45:27.243570 2505 kubelet.go:3305] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:27.375003 kubelet[2505]: I1124 01:45:27.374962 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.229849 kubelet[2505]: E1124 01:45:28.229789 2505 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.244286 kubelet[2505]: E1124 01:45:28.244249 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.246026 kubelet[2505]: E1124 01:45:28.245279 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.304916 kubelet[2505]: I1124 01:45:28.304676 2505 kubelet_node_status.go:78] "Successfully registered node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.304916 kubelet[2505]: E1124 01:45:28.304740 2505 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-7vvyr.gb1.brightbox.com\": node \"srv-7vvyr.gb1.brightbox.com\" not found" Nov 24 01:45:28.330345 kubelet[2505]: I1124 01:45:28.330260 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.350333 kubelet[2505]: E1124 01:45:28.350282 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-7vvyr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.350333 kubelet[2505]: I1124 01:45:28.350324 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.355252 kubelet[2505]: E1124 01:45:28.355168 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.355252 kubelet[2505]: I1124 01:45:28.355248 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:28.357510 kubelet[2505]: E1124 01:45:28.357480 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:29.101657 kubelet[2505]: I1124 01:45:29.101333 2505 apiserver.go:52] "Watching apiserver" Nov 24 01:45:29.130892 kubelet[2505]: I1124 01:45:29.130825 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 01:45:30.670730 systemd[1]: Reload requested from client PID 2791 ('systemctl') (unit session-11.scope)... Nov 24 01:45:30.671263 systemd[1]: Reloading... Nov 24 01:45:30.799712 zram_generator::config[2836]: No configuration found. 
Nov 24 01:45:31.112775 kubelet[2505]: I1124 01:45:31.111590 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.123606 kubelet[2505]: I1124 01:45:31.123352 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 01:45:31.192741 systemd[1]: Reloading finished in 520 ms. Nov 24 01:45:31.241037 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:45:31.254533 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 01:45:31.255182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:31.255416 systemd[1]: kubelet.service: Consumed 992ms CPU time, 127.4M memory peak. Nov 24 01:45:31.260979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 01:45:31.566312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 01:45:31.579262 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 01:45:31.652265 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 01:45:31.653159 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 01:45:31.653159 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 01:45:31.653159 kubelet[2900]: I1124 01:45:31.652904 2900 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 01:45:31.665963 kubelet[2900]: I1124 01:45:31.665909 2900 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 01:45:31.665963 kubelet[2900]: I1124 01:45:31.665954 2900 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 01:45:31.666307 kubelet[2900]: I1124 01:45:31.666250 2900 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 01:45:31.668536 kubelet[2900]: I1124 01:45:31.668468 2900 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 01:45:31.672901 kubelet[2900]: I1124 01:45:31.672486 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 01:45:31.692380 kubelet[2900]: I1124 01:45:31.692341 2900 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 01:45:31.703683 kubelet[2900]: I1124 01:45:31.702897 2900 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 01:45:31.703683 kubelet[2900]: I1124 01:45:31.703234 2900 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 01:45:31.703683 kubelet[2900]: I1124 01:45:31.703270 2900 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-7vvyr.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 01:45:31.704073 kubelet[2900]: I1124 01:45:31.704051 2900 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 01:45:31.704169 kubelet[2900]: I1124 01:45:31.704153 2900 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 01:45:31.704527 kubelet[2900]: I1124 01:45:31.704304 2900 state_mem.go:36] "Initialized new in-memory state store" Nov 24 01:45:31.705700 kubelet[2900]: I1124 01:45:31.705465 2900 kubelet.go:480] "Attempting to sync node with API server" Nov 24 01:45:31.705855 kubelet[2900]: I1124 01:45:31.705835 2900 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 01:45:31.705971 kubelet[2900]: I1124 01:45:31.705954 2900 kubelet.go:386] "Adding apiserver pod source" Nov 24 01:45:31.706089 kubelet[2900]: I1124 01:45:31.706069 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 01:45:31.717384 kubelet[2900]: I1124 01:45:31.716106 2900 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 01:45:31.717384 kubelet[2900]: I1124 01:45:31.716728 2900 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 01:45:31.724650 kubelet[2900]: I1124 01:45:31.724400 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 01:45:31.724650 kubelet[2900]: I1124 01:45:31.724463 2900 server.go:1289] "Started kubelet" Nov 24 01:45:31.728689 kubelet[2900]: I1124 01:45:31.727919 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 01:45:31.742014 kubelet[2900]: I1124 
01:45:31.741949 2900 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 01:45:31.753896 kubelet[2900]: I1124 01:45:31.753779 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 01:45:31.754812 kubelet[2900]: E1124 01:45:31.754453 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-7vvyr.gb1.brightbox.com\" not found" Nov 24 01:45:31.755426 kubelet[2900]: I1124 01:45:31.755167 2900 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 01:45:31.755752 kubelet[2900]: I1124 01:45:31.751591 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 01:45:31.756512 kubelet[2900]: I1124 01:45:31.756178 2900 reconciler.go:26] "Reconciler: start to sync state" Nov 24 01:45:31.761843 kubelet[2900]: I1124 01:45:31.755773 2900 server.go:317] "Adding debug handlers to kubelet server" Nov 24 01:45:31.762640 kubelet[2900]: I1124 01:45:31.742727 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 01:45:31.769239 kubelet[2900]: I1124 01:45:31.769034 2900 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 01:45:31.783698 kubelet[2900]: E1124 01:45:31.782847 2900 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 01:45:31.786760 kubelet[2900]: I1124 01:45:31.786726 2900 factory.go:223] Registration of the containerd container factory successfully Nov 24 01:45:31.786940 kubelet[2900]: I1124 01:45:31.786922 2900 factory.go:223] Registration of the systemd container factory successfully Nov 24 01:45:31.787201 kubelet[2900]: I1124 01:45:31.787164 2900 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 01:45:31.831657 kubelet[2900]: I1124 01:45:31.830993 2900 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 01:45:31.839092 kubelet[2900]: I1124 01:45:31.839053 2900 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 01:45:31.839294 kubelet[2900]: I1124 01:45:31.839276 2900 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 01:45:31.839404 kubelet[2900]: I1124 01:45:31.839385 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 01:45:31.839900 kubelet[2900]: I1124 01:45:31.839479 2900 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 01:45:31.839900 kubelet[2900]: E1124 01:45:31.839546 2900 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889608 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889662 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889690 2900 state_mem.go:36] "Initialized new in-memory state store" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889894 2900 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889919 2900 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889945 2900 policy_none.go:49] "None policy: Start" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889960 2900 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.889977 2900 state_mem.go:35] "Initializing new in-memory state store" Nov 24 01:45:31.890449 kubelet[2900]: I1124 01:45:31.890099 2900 state_mem.go:75] "Updated machine memory state" Nov 24 01:45:31.898587 kubelet[2900]: E1124 01:45:31.898250 2900 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 01:45:31.898587 kubelet[2900]: I1124 01:45:31.898507 2900 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 01:45:31.898587 kubelet[2900]: I1124 01:45:31.898524 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 01:45:31.902706 kubelet[2900]: I1124 01:45:31.901477 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 01:45:31.906633 kubelet[2900]: E1124 01:45:31.905541 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 01:45:31.940650 kubelet[2900]: I1124 01:45:31.940580 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.943513 kubelet[2900]: I1124 01:45:31.943451 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.944021 kubelet[2900]: I1124 01:45:31.942085 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.954099 kubelet[2900]: I1124 01:45:31.954051 2900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 01:45:31.957353 kubelet[2900]: I1124 01:45:31.957271 2900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 01:45:31.958502 kubelet[2900]: I1124 01:45:31.958472 2900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 01:45:31.958565 kubelet[2900]: E1124 01:45:31.958538 2900 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-7vvyr.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964170 kubelet[2900]: I1124 01:45:31.963774 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-ca-certs\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964170 kubelet[2900]: I1124 01:45:31.963864 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-k8s-certs\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964170 kubelet[2900]: I1124 01:45:31.963898 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-kubeconfig\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964170 kubelet[2900]: I1124 01:45:31.963935 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964170 kubelet[2900]: I1124 01:45:31.963967 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/13dd715d19d24c24277cc10851ee1f6c-k8s-certs\") pod \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" (UID: \"13dd715d19d24c24277cc10851ee1f6c\") " pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964572 kubelet[2900]: I1124 01:45:31.963996 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08fde5bf4bd7a511dd6dabf099878835-flexvolume-dir\") pod \"kube-controller-manager-srv-7vvyr.gb1.brightbox.com\" (UID: \"08fde5bf4bd7a511dd6dabf099878835\") " pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964572 kubelet[2900]: I1124 01:45:31.964023 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0877f5724b0015fe09404c8443a4869-kubeconfig\") pod \"kube-scheduler-srv-7vvyr.gb1.brightbox.com\" (UID: \"e0877f5724b0015fe09404c8443a4869\") " pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964572 kubelet[2900]: I1124 01:45:31.964047 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13dd715d19d24c24277cc10851ee1f6c-ca-certs\") pod \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" (UID: \"13dd715d19d24c24277cc10851ee1f6c\") " pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:31.964572 kubelet[2900]: I1124 01:45:31.964072 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13dd715d19d24c24277cc10851ee1f6c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-7vvyr.gb1.brightbox.com\" (UID: \"13dd715d19d24c24277cc10851ee1f6c\") " pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:32.020663 kubelet[2900]: I1124 01:45:32.019565 2900 kubelet_node_status.go:75] "Attempting to register node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:32.036319 kubelet[2900]: I1124 01:45:32.036243 2900 kubelet_node_status.go:124] "Node was previously registered" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:32.037298 kubelet[2900]: I1124 01:45:32.036749 2900 kubelet_node_status.go:78] "Successfully registered node" node="srv-7vvyr.gb1.brightbox.com" Nov 24 01:45:32.716037 kubelet[2900]: I1124 01:45:32.715983 2900 apiserver.go:52] "Watching apiserver" Nov 24 01:45:32.756378 kubelet[2900]: I1124 01:45:32.756286 2900 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 01:45:32.963162 kubelet[2900]: I1124 01:45:32.963079 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-7vvyr.gb1.brightbox.com" podStartSLOduration=1.963037487 podStartE2EDuration="1.963037487s" podCreationTimestamp="2025-11-24 01:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 01:45:32.929337282 +0000 UTC m=+1.342125029" watchObservedRunningTime="2025-11-24 01:45:32.963037487 +0000 UTC m=+1.375825216" Nov 24 01:45:32.963406 kubelet[2900]: I1124 01:45:32.963222 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-7vvyr.gb1.brightbox.com" podStartSLOduration=1.963215156 podStartE2EDuration="1.963215156s" 
podCreationTimestamp="2025-11-24 01:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 01:45:32.960874077 +0000 UTC m=+1.373661833" watchObservedRunningTime="2025-11-24 01:45:32.963215156 +0000 UTC m=+1.376002883" Nov 24 01:45:32.984002 kubelet[2900]: I1124 01:45:32.983813 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-7vvyr.gb1.brightbox.com" podStartSLOduration=1.983792614 podStartE2EDuration="1.983792614s" podCreationTimestamp="2025-11-24 01:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 01:45:32.981117604 +0000 UTC m=+1.393905362" watchObservedRunningTime="2025-11-24 01:45:32.983792614 +0000 UTC m=+1.396580345" Nov 24 01:45:36.810696 kubelet[2900]: I1124 01:45:36.810545 2900 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 01:45:36.812673 containerd[1582]: time="2025-11-24T01:45:36.811862024Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 01:45:36.813255 kubelet[2900]: I1124 01:45:36.812123 2900 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 01:45:37.672169 systemd[1]: Created slice kubepods-besteffort-podd8feb83b_db99_4368_8722_05f852643839.slice - libcontainer container kubepods-besteffort-podd8feb83b_db99_4368_8722_05f852643839.slice. Nov 24 01:45:37.705803 kubelet[2900]: I1124 01:45:37.705701 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8feb83b-db99-4368-8722-05f852643839-lib-modules\") pod \"kube-proxy-bx2fw\" (UID: \"d8feb83b-db99-4368-8722-05f852643839\") " pod="kube-system/kube-proxy-bx2fw" Nov 24 01:45:37.706247 kubelet[2900]: I1124 01:45:37.706065 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l9b\" (UniqueName: \"kubernetes.io/projected/d8feb83b-db99-4368-8722-05f852643839-kube-api-access-d5l9b\") pod \"kube-proxy-bx2fw\" (UID: \"d8feb83b-db99-4368-8722-05f852643839\") " pod="kube-system/kube-proxy-bx2fw" Nov 24 01:45:37.706247 kubelet[2900]: I1124 01:45:37.706140 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8feb83b-db99-4368-8722-05f852643839-kube-proxy\") pod \"kube-proxy-bx2fw\" (UID: \"d8feb83b-db99-4368-8722-05f852643839\") " pod="kube-system/kube-proxy-bx2fw" Nov 24 01:45:37.706247 kubelet[2900]: I1124 01:45:37.706191 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8feb83b-db99-4368-8722-05f852643839-xtables-lock\") pod \"kube-proxy-bx2fw\" (UID: \"d8feb83b-db99-4368-8722-05f852643839\") " pod="kube-system/kube-proxy-bx2fw" Nov 24 01:45:37.986381 containerd[1582]: time="2025-11-24T01:45:37.986229626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bx2fw,Uid:d8feb83b-db99-4368-8722-05f852643839,Namespace:kube-system,Attempt:0,}" Nov 24 01:45:38.019144 containerd[1582]: time="2025-11-24T01:45:38.019036312Z" level=info msg="connecting to shim 
9ef448f10560010b488e4cdbe4fa86b10321efe3e26cdf296068443e1aa8cc28" address="unix:///run/containerd/s/5ddda601356a8baace1d8fcf20cccb3fd1b0109130a46a57b21592b3a7e57146" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:38.093389 systemd[1]: Started cri-containerd-9ef448f10560010b488e4cdbe4fa86b10321efe3e26cdf296068443e1aa8cc28.scope - libcontainer container 9ef448f10560010b488e4cdbe4fa86b10321efe3e26cdf296068443e1aa8cc28. Nov 24 01:45:38.158309 systemd[1]: Created slice kubepods-besteffort-pode3c4498a_8194_46dd_aa11_0da05c19314b.slice - libcontainer container kubepods-besteffort-pode3c4498a_8194_46dd_aa11_0da05c19314b.slice. Nov 24 01:45:38.208877 containerd[1582]: time="2025-11-24T01:45:38.208747178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bx2fw,Uid:d8feb83b-db99-4368-8722-05f852643839,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ef448f10560010b488e4cdbe4fa86b10321efe3e26cdf296068443e1aa8cc28\"" Nov 24 01:45:38.209513 kubelet[2900]: I1124 01:45:38.209316 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3c4498a-8194-46dd-aa11-0da05c19314b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sd8vq\" (UID: \"e3c4498a-8194-46dd-aa11-0da05c19314b\") " pod="tigera-operator/tigera-operator-7dcd859c48-sd8vq" Nov 24 01:45:38.210824 kubelet[2900]: I1124 01:45:38.210686 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxpd\" (UniqueName: \"kubernetes.io/projected/e3c4498a-8194-46dd-aa11-0da05c19314b-kube-api-access-nkxpd\") pod \"tigera-operator-7dcd859c48-sd8vq\" (UID: \"e3c4498a-8194-46dd-aa11-0da05c19314b\") " pod="tigera-operator/tigera-operator-7dcd859c48-sd8vq" Nov 24 01:45:38.216520 containerd[1582]: time="2025-11-24T01:45:38.216446583Z" level=info msg="CreateContainer within sandbox \"9ef448f10560010b488e4cdbe4fa86b10321efe3e26cdf296068443e1aa8cc28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 01:45:38.231840 containerd[1582]: time="2025-11-24T01:45:38.231787530Z" level=info msg="Container c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:45:38.244776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943174533.mount: Deactivated successfully. Nov 24 01:45:38.249385 containerd[1582]: time="2025-11-24T01:45:38.249318238Z" level=info msg="CreateContainer within sandbox \"9ef448f10560010b488e4cdbe4fa86b10321efe3e26cdf296068443e1aa8cc28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347\"" Nov 24 01:45:38.252601 containerd[1582]: time="2025-11-24T01:45:38.250572550Z" level=info msg="StartContainer for \"c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347\"" Nov 24 01:45:38.252601 containerd[1582]: time="2025-11-24T01:45:38.252511808Z" level=info msg="connecting to shim c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347" address="unix:///run/containerd/s/5ddda601356a8baace1d8fcf20cccb3fd1b0109130a46a57b21592b3a7e57146" protocol=ttrpc version=3 Nov 24 01:45:38.278958 systemd[1]: Started cri-containerd-c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347.scope - libcontainer container c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347. 
Nov 24 01:45:38.383205 containerd[1582]: time="2025-11-24T01:45:38.383120986Z" level=info msg="StartContainer for \"c1a05c7c62f944a4fc85a679c97054cc6a95917e6c6765d6e1c850cafabc5347\" returns successfully" Nov 24 01:45:38.463296 containerd[1582]: time="2025-11-24T01:45:38.463244526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sd8vq,Uid:e3c4498a-8194-46dd-aa11-0da05c19314b,Namespace:tigera-operator,Attempt:0,}" Nov 24 01:45:38.488990 containerd[1582]: time="2025-11-24T01:45:38.488907136Z" level=info msg="connecting to shim 6e66330d9f25503ee8b21a4e1df5ff5933c4f2a83c8182ceb0424cd84bb5192d" address="unix:///run/containerd/s/a7e6bfcff2dc3c53e0cd2e3715b03d09258c1df4ec92c62506516dd8bdd8c6bb" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:38.537922 systemd[1]: Started cri-containerd-6e66330d9f25503ee8b21a4e1df5ff5933c4f2a83c8182ceb0424cd84bb5192d.scope - libcontainer container 6e66330d9f25503ee8b21a4e1df5ff5933c4f2a83c8182ceb0424cd84bb5192d. Nov 24 01:45:38.660554 containerd[1582]: time="2025-11-24T01:45:38.660503159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sd8vq,Uid:e3c4498a-8194-46dd-aa11-0da05c19314b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e66330d9f25503ee8b21a4e1df5ff5933c4f2a83c8182ceb0424cd84bb5192d\"" Nov 24 01:45:38.666275 containerd[1582]: time="2025-11-24T01:45:38.666206418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 01:45:38.920264 kubelet[2900]: I1124 01:45:38.919841 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bx2fw" podStartSLOduration=1.9198201240000001 podStartE2EDuration="1.919820124s" podCreationTimestamp="2025-11-24 01:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 01:45:38.919065565 +0000 UTC m=+7.331853318" watchObservedRunningTime="2025-11-24 01:45:38.919820124 +0000 UTC m=+7.332607857" Nov 24 01:45:40.771613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378098915.mount: Deactivated successfully. 
Nov 24 01:45:41.905637 containerd[1582]: time="2025-11-24T01:45:41.903951647Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:41.908639 containerd[1582]: time="2025-11-24T01:45:41.906726036Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 01:45:41.911358 containerd[1582]: time="2025-11-24T01:45:41.911321524Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:41.916573 containerd[1582]: time="2025-11-24T01:45:41.916408536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:41.929860 containerd[1582]: time="2025-11-24T01:45:41.929166037Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.26278962s" Nov 24 01:45:41.930690 containerd[1582]: time="2025-11-24T01:45:41.930061625Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 01:45:41.963030 containerd[1582]: time="2025-11-24T01:45:41.962968470Z" level=info msg="CreateContainer within sandbox \"6e66330d9f25503ee8b21a4e1df5ff5933c4f2a83c8182ceb0424cd84bb5192d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 01:45:41.987252 containerd[1582]: time="2025-11-24T01:45:41.986643791Z" level=info msg="Container 7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:45:41.988910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790473432.mount: Deactivated successfully. Nov 24 01:45:41.996774 containerd[1582]: time="2025-11-24T01:45:41.996715976Z" level=info msg="CreateContainer within sandbox \"6e66330d9f25503ee8b21a4e1df5ff5933c4f2a83c8182ceb0424cd84bb5192d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f\"" Nov 24 01:45:41.999644 containerd[1582]: time="2025-11-24T01:45:41.998768219Z" level=info msg="StartContainer for \"7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f\"" Nov 24 01:45:42.002694 containerd[1582]: time="2025-11-24T01:45:42.002655388Z" level=info msg="connecting to shim 7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f" address="unix:///run/containerd/s/a7e6bfcff2dc3c53e0cd2e3715b03d09258c1df4ec92c62506516dd8bdd8c6bb" protocol=ttrpc version=3 Nov 24 01:45:42.040975 systemd[1]: Started cri-containerd-7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f.scope - libcontainer container 7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f. 
Nov 24 01:45:42.091137 containerd[1582]: time="2025-11-24T01:45:42.091088391Z" level=info msg="StartContainer for \"7d4459520f290459ec99141d290d9cd732dbddb4a9c5f79ad60fa17d9fcc315f\" returns successfully" Nov 24 01:45:42.926677 kubelet[2900]: I1124 01:45:42.926161 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sd8vq" podStartSLOduration=1.654806868 podStartE2EDuration="4.926144651s" podCreationTimestamp="2025-11-24 01:45:38 +0000 UTC" firstStartedPulling="2025-11-24 01:45:38.664787751 +0000 UTC m=+7.077575471" lastFinishedPulling="2025-11-24 01:45:41.936125532 +0000 UTC m=+10.348913254" observedRunningTime="2025-11-24 01:45:42.926069359 +0000 UTC m=+11.338857113" watchObservedRunningTime="2025-11-24 01:45:42.926144651 +0000 UTC m=+11.338932378" Nov 24 01:45:49.829360 sudo[1882]: pam_unix(sudo:session): session closed for user root Nov 24 01:45:49.976760 sshd[1881]: Connection closed by 139.178.68.195 port 42106 Nov 24 01:45:49.978657 sshd-session[1878]: pam_unix(sshd:session): session closed for user core Nov 24 01:45:49.987906 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Nov 24 01:45:49.989141 systemd[1]: sshd@8-10.230.76.74:22-139.178.68.195:42106.service: Deactivated successfully. Nov 24 01:45:50.000088 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 01:45:50.001417 systemd[1]: session-11.scope: Consumed 8.401s CPU time, 157.5M memory peak. Nov 24 01:45:50.012050 systemd-logind[1561]: Removed session 11. Nov 24 01:45:56.103930 systemd[1]: Created slice kubepods-besteffort-pod616f21a0_a1ad_423d_92ad_4cfc8ffc4f86.slice - libcontainer container kubepods-besteffort-pod616f21a0_a1ad_423d_92ad_4cfc8ffc4f86.slice. Nov 24 01:45:56.147030 kubelet[2900]: I1124 01:45:56.146977 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616f21a0-a1ad-423d-92ad-4cfc8ffc4f86-tigera-ca-bundle\") pod \"calico-typha-58cdbb7985-9g9nh\" (UID: \"616f21a0-a1ad-423d-92ad-4cfc8ffc4f86\") " pod="calico-system/calico-typha-58cdbb7985-9g9nh" Nov 24 01:45:56.147030 kubelet[2900]: I1124 01:45:56.147038 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/616f21a0-a1ad-423d-92ad-4cfc8ffc4f86-typha-certs\") pod \"calico-typha-58cdbb7985-9g9nh\" (UID: \"616f21a0-a1ad-423d-92ad-4cfc8ffc4f86\") " pod="calico-system/calico-typha-58cdbb7985-9g9nh" Nov 24 01:45:56.148340 kubelet[2900]: I1124 01:45:56.147074 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2b4p\" (UniqueName: \"kubernetes.io/projected/616f21a0-a1ad-423d-92ad-4cfc8ffc4f86-kube-api-access-h2b4p\") pod \"calico-typha-58cdbb7985-9g9nh\" (UID: \"616f21a0-a1ad-423d-92ad-4cfc8ffc4f86\") " pod="calico-system/calico-typha-58cdbb7985-9g9nh" Nov 24 01:45:56.220210 systemd[1]: Created slice kubepods-besteffort-podee7b2794_1454_4f11_a2ec_f627b967e1da.slice - libcontainer container kubepods-besteffort-podee7b2794_1454_4f11_a2ec_f627b967e1da.slice. 
Nov 24 01:45:56.247728 kubelet[2900]: I1124 01:45:56.247664 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee7b2794-1454-4f11-a2ec-f627b967e1da-node-certs\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.247728 kubelet[2900]: I1124 01:45:56.247736 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-var-run-calico\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.248017 kubelet[2900]: I1124 01:45:56.247768 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-policysync\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.248017 kubelet[2900]: I1124 01:45:56.247796 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-flexvol-driver-host\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.248017 kubelet[2900]: I1124 01:45:56.247855 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-cni-bin-dir\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.249599 kubelet[2900]: I1124 01:45:56.249561 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee7b2794-1454-4f11-a2ec-f627b967e1da-tigera-ca-bundle\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.249789 kubelet[2900]: I1124 01:45:56.249760 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-cni-net-dir\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.252756 kubelet[2900]: I1124 01:45:56.249801 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-lib-modules\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.252756 kubelet[2900]: I1124 01:45:56.249829 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-xtables-lock\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.252756 kubelet[2900]: I1124 01:45:56.249858 2900 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-cni-log-dir\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.252756 kubelet[2900]: I1124 01:45:56.249887 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee7b2794-1454-4f11-a2ec-f627b967e1da-var-lib-calico\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.252756 kubelet[2900]: I1124 01:45:56.249918 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ftw\" (UniqueName: \"kubernetes.io/projected/ee7b2794-1454-4f11-a2ec-f627b967e1da-kube-api-access-c2ftw\") pod \"calico-node-n7x5k\" (UID: \"ee7b2794-1454-4f11-a2ec-f627b967e1da\") " pod="calico-system/calico-node-n7x5k" Nov 24 01:45:56.339241 kubelet[2900]: E1124 01:45:56.338884 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:45:56.363382 kubelet[2900]: E1124 01:45:56.363012 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.363382 kubelet[2900]: W1124 01:45:56.363057 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.363382 kubelet[2900]: E1124 01:45:56.363132 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.390075 kubelet[2900]: E1124 01:45:56.387843 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.390075 kubelet[2900]: W1124 01:45:56.387904 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.390075 kubelet[2900]: E1124 01:45:56.387946 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.409741 kubelet[2900]: E1124 01:45:56.409498 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.410353 kubelet[2900]: W1124 01:45:56.409531 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.410353 kubelet[2900]: E1124 01:45:56.410041 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.410793 kubelet[2900]: E1124 01:45:56.410685 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.411024 kubelet[2900]: W1124 01:45:56.410987 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.411410 kubelet[2900]: E1124 01:45:56.411236 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.411583 kubelet[2900]: E1124 01:45:56.411564 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.411734 kubelet[2900]: W1124 01:45:56.411712 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.411823 kubelet[2900]: E1124 01:45:56.411804 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.412392 kubelet[2900]: E1124 01:45:56.412230 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.412392 kubelet[2900]: W1124 01:45:56.412249 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.412392 kubelet[2900]: E1124 01:45:56.412265 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.413052 kubelet[2900]: E1124 01:45:56.413000 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.413235 kubelet[2900]: W1124 01:45:56.413204 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.413368 kubelet[2900]: E1124 01:45:56.413343 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.413860 kubelet[2900]: E1124 01:45:56.413839 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.414238 kubelet[2900]: W1124 01:45:56.414050 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.414238 kubelet[2900]: E1124 01:45:56.414084 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.414457 kubelet[2900]: E1124 01:45:56.414437 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.414789 kubelet[2900]: W1124 01:45:56.414565 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.414789 kubelet[2900]: E1124 01:45:56.414593 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.415025 kubelet[2900]: E1124 01:45:56.415005 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.415146 kubelet[2900]: W1124 01:45:56.415112 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.415265 kubelet[2900]: E1124 01:45:56.415242 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.416080 kubelet[2900]: E1124 01:45:56.415680 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.416080 kubelet[2900]: W1124 01:45:56.415857 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.416080 kubelet[2900]: E1124 01:45:56.415876 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.418466 kubelet[2900]: E1124 01:45:56.417704 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.418466 kubelet[2900]: W1124 01:45:56.418112 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.418466 kubelet[2900]: E1124 01:45:56.418192 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.418794 kubelet[2900]: E1124 01:45:56.418771 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.418950 kubelet[2900]: W1124 01:45:56.418924 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.419069 kubelet[2900]: E1124 01:45:56.419048 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.419718 kubelet[2900]: E1124 01:45:56.419545 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.419718 kubelet[2900]: W1124 01:45:56.419563 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.419718 kubelet[2900]: E1124 01:45:56.419580 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.422316 kubelet[2900]: E1124 01:45:56.421994 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.422316 kubelet[2900]: W1124 01:45:56.422043 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.422316 kubelet[2900]: E1124 01:45:56.422086 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.423516 kubelet[2900]: E1124 01:45:56.422864 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.423998 kubelet[2900]: W1124 01:45:56.423672 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.423998 kubelet[2900]: E1124 01:45:56.423724 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.424273 kubelet[2900]: E1124 01:45:56.424209 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.424475 kubelet[2900]: W1124 01:45:56.424406 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.424752 kubelet[2900]: E1124 01:45:56.424729 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.426885 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.429677 kubelet[2900]: W1124 01:45:56.426946 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.426992 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.427494 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.429677 kubelet[2900]: W1124 01:45:56.427523 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.427553 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.427822 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.429677 kubelet[2900]: W1124 01:45:56.427836 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.427857 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.429677 kubelet[2900]: E1124 01:45:56.428083 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.430419 kubelet[2900]: W1124 01:45:56.428098 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.430419 kubelet[2900]: E1124 01:45:56.428113 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.430419 kubelet[2900]: E1124 01:45:56.428443 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.430419 kubelet[2900]: W1124 01:45:56.428456 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.430419 kubelet[2900]: E1124 01:45:56.428472 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.431857 containerd[1582]: time="2025-11-24T01:45:56.431738222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58cdbb7985-9g9nh,Uid:616f21a0-a1ad-423d-92ad-4cfc8ffc4f86,Namespace:calico-system,Attempt:0,}" Nov 24 01:45:56.453272 kubelet[2900]: E1124 01:45:56.453209 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.454647 kubelet[2900]: W1124 01:45:56.453964 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.454647 kubelet[2900]: E1124 01:45:56.454029 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.454647 kubelet[2900]: I1124 01:45:56.454114 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d4c21c8f-271a-4e0d-ab8d-b3169fe61687-varrun\") pod \"csi-node-driver-dc98b\" (UID: \"d4c21c8f-271a-4e0d-ab8d-b3169fe61687\") " pod="calico-system/csi-node-driver-dc98b" Nov 24 01:45:56.455181 kubelet[2900]: E1124 01:45:56.455084 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.455321 kubelet[2900]: W1124 01:45:56.455297 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.455493 kubelet[2900]: E1124 01:45:56.455471 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.455656 kubelet[2900]: I1124 01:45:56.455606 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4c21c8f-271a-4e0d-ab8d-b3169fe61687-registration-dir\") pod \"csi-node-driver-dc98b\" (UID: \"d4c21c8f-271a-4e0d-ab8d-b3169fe61687\") " pod="calico-system/csi-node-driver-dc98b" Nov 24 01:45:56.456209 kubelet[2900]: E1124 01:45:56.456009 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.456209 kubelet[2900]: W1124 01:45:56.456029 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.456209 kubelet[2900]: E1124 01:45:56.456045 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.456209 kubelet[2900]: I1124 01:45:56.456067 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9784g\" (UniqueName: \"kubernetes.io/projected/d4c21c8f-271a-4e0d-ab8d-b3169fe61687-kube-api-access-9784g\") pod \"csi-node-driver-dc98b\" (UID: \"d4c21c8f-271a-4e0d-ab8d-b3169fe61687\") " pod="calico-system/csi-node-driver-dc98b" Nov 24 01:45:56.456607 kubelet[2900]: E1124 01:45:56.456569 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.456756 kubelet[2900]: W1124 01:45:56.456730 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.456844 kubelet[2900]: E1124 01:45:56.456825 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.457354 kubelet[2900]: I1124 01:45:56.457110 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4c21c8f-271a-4e0d-ab8d-b3169fe61687-kubelet-dir\") pod \"csi-node-driver-dc98b\" (UID: \"d4c21c8f-271a-4e0d-ab8d-b3169fe61687\") " pod="calico-system/csi-node-driver-dc98b" Nov 24 01:45:56.457354 kubelet[2900]: E1124 01:45:56.457201 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.457354 kubelet[2900]: W1124 01:45:56.457215 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.457354 kubelet[2900]: E1124 01:45:56.457230 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.458207 kubelet[2900]: E1124 01:45:56.458188 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.458552 kubelet[2900]: W1124 01:45:56.458529 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.459159 kubelet[2900]: E1124 01:45:56.458975 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.460272 kubelet[2900]: E1124 01:45:56.460133 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.460595 kubelet[2900]: W1124 01:45:56.460437 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.460595 kubelet[2900]: E1124 01:45:56.460465 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.461517 kubelet[2900]: E1124 01:45:56.461498 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.461743 kubelet[2900]: W1124 01:45:56.461583 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.461743 kubelet[2900]: E1124 01:45:56.461603 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.465041 kubelet[2900]: E1124 01:45:56.464146 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.465041 kubelet[2900]: W1124 01:45:56.464181 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.465041 kubelet[2900]: E1124 01:45:56.464212 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.465041 kubelet[2900]: E1124 01:45:56.464806 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.465041 kubelet[2900]: W1124 01:45:56.464820 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.465041 kubelet[2900]: E1124 01:45:56.464835 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.465499 kubelet[2900]: E1124 01:45:56.465318 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.465499 kubelet[2900]: W1124 01:45:56.465332 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.465499 kubelet[2900]: E1124 01:45:56.465346 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.465859 kubelet[2900]: I1124 01:45:56.461891 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4c21c8f-271a-4e0d-ab8d-b3169fe61687-socket-dir\") pod \"csi-node-driver-dc98b\" (UID: \"d4c21c8f-271a-4e0d-ab8d-b3169fe61687\") " pod="calico-system/csi-node-driver-dc98b" Nov 24 01:45:56.466802 kubelet[2900]: E1124 01:45:56.466768 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.466802 kubelet[2900]: W1124 01:45:56.466791 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.466802 kubelet[2900]: E1124 01:45:56.466807 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.471847 kubelet[2900]: E1124 01:45:56.467439 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.471847 kubelet[2900]: W1124 01:45:56.467455 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.471847 kubelet[2900]: E1124 01:45:56.467472 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.471847 kubelet[2900]: E1124 01:45:56.468897 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.471847 kubelet[2900]: W1124 01:45:56.468911 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.471847 kubelet[2900]: E1124 01:45:56.468927 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.471847 kubelet[2900]: E1124 01:45:56.469438 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.471847 kubelet[2900]: W1124 01:45:56.469452 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.471847 kubelet[2900]: E1124 01:45:56.469467 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.526983 containerd[1582]: time="2025-11-24T01:45:56.526928872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7x5k,Uid:ee7b2794-1454-4f11-a2ec-f627b967e1da,Namespace:calico-system,Attempt:0,}" Nov 24 01:45:56.539481 containerd[1582]: time="2025-11-24T01:45:56.539419944Z" level=info msg="connecting to shim eb6ff040f2a3f29d3248575329669390eb1f1433c387b2224efc02324e61ea20" address="unix:///run/containerd/s/9799d41e43d78aada6f3a6286ca544f533b1a1691244a4609ec5ae2a0c18b4a8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:56.569656 kubelet[2900]: E1124 01:45:56.569578 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.569656 kubelet[2900]: W1124 01:45:56.569634 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.569656 kubelet[2900]: E1124 01:45:56.569666 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.570962 kubelet[2900]: E1124 01:45:56.570737 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.570962 kubelet[2900]: W1124 01:45:56.570752 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.570962 kubelet[2900]: E1124 01:45:56.570800 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.571503 kubelet[2900]: E1124 01:45:56.571401 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.571503 kubelet[2900]: W1124 01:45:56.571454 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.571503 kubelet[2900]: E1124 01:45:56.571471 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.572700 kubelet[2900]: E1124 01:45:56.571993 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.572700 kubelet[2900]: W1124 01:45:56.572012 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.573138 kubelet[2900]: E1124 01:45:56.573019 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.573503 kubelet[2900]: E1124 01:45:56.573476 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.573503 kubelet[2900]: W1124 01:45:56.573497 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.573884 kubelet[2900]: E1124 01:45:56.573514 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.573930 kubelet[2900]: E1124 01:45:56.573893 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.573930 kubelet[2900]: W1124 01:45:56.573907 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.573930 kubelet[2900]: E1124 01:45:56.573923 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.574332 kubelet[2900]: E1124 01:45:56.574277 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.574332 kubelet[2900]: W1124 01:45:56.574290 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.574821 kubelet[2900]: E1124 01:45:56.574344 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.574821 kubelet[2900]: E1124 01:45:56.574727 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.574821 kubelet[2900]: W1124 01:45:56.574765 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.574821 kubelet[2900]: E1124 01:45:56.574785 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.576787 kubelet[2900]: E1124 01:45:56.575428 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.576787 kubelet[2900]: W1124 01:45:56.575443 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.576787 kubelet[2900]: E1124 01:45:56.575458 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.576787 kubelet[2900]: E1124 01:45:56.575984 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.576787 kubelet[2900]: W1124 01:45:56.575998 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.576787 kubelet[2900]: E1124 01:45:56.576013 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.577037 kubelet[2900]: E1124 01:45:56.576809 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.577037 kubelet[2900]: W1124 01:45:56.576834 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.577037 kubelet[2900]: E1124 01:45:56.576849 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.578640 kubelet[2900]: E1124 01:45:56.577199 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.578640 kubelet[2900]: W1124 01:45:56.577677 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.578640 kubelet[2900]: E1124 01:45:56.577696 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.578640 kubelet[2900]: E1124 01:45:56.578010 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.578640 kubelet[2900]: W1124 01:45:56.578024 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.578640 kubelet[2900]: E1124 01:45:56.578038 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.578640 kubelet[2900]: E1124 01:45:56.578434 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.579337 kubelet[2900]: W1124 01:45:56.578448 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.579406 kubelet[2900]: E1124 01:45:56.579343 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.582659 kubelet[2900]: E1124 01:45:56.580841 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.582659 kubelet[2900]: W1124 01:45:56.580864 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.582659 kubelet[2900]: E1124 01:45:56.580881 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.582659 kubelet[2900]: E1124 01:45:56.581831 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.582659 kubelet[2900]: W1124 01:45:56.581846 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.582659 kubelet[2900]: E1124 01:45:56.581863 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.582659 kubelet[2900]: E1124 01:45:56.582176 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.582659 kubelet[2900]: W1124 01:45:56.582191 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.582659 kubelet[2900]: E1124 01:45:56.582206 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.590812 kubelet[2900]: E1124 01:45:56.590745 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.590812 kubelet[2900]: W1124 01:45:56.590783 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.590812 kubelet[2900]: E1124 01:45:56.590816 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.593974 kubelet[2900]: E1124 01:45:56.592794 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.593974 kubelet[2900]: W1124 01:45:56.592821 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.593974 kubelet[2900]: E1124 01:45:56.592842 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.595773 kubelet[2900]: E1124 01:45:56.595737 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.595773 kubelet[2900]: W1124 01:45:56.595768 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.595923 kubelet[2900]: E1124 01:45:56.595792 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.597461 kubelet[2900]: E1124 01:45:56.596388 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.597461 kubelet[2900]: W1124 01:45:56.596659 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.597461 kubelet[2900]: E1124 01:45:56.596693 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.598646 kubelet[2900]: E1124 01:45:56.598317 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.598646 kubelet[2900]: W1124 01:45:56.598341 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.598646 kubelet[2900]: E1124 01:45:56.598361 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.599555 kubelet[2900]: E1124 01:45:56.599217 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.599555 kubelet[2900]: W1124 01:45:56.599232 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.599555 kubelet[2900]: E1124 01:45:56.599248 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.601651 kubelet[2900]: E1124 01:45:56.600816 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.601651 kubelet[2900]: W1124 01:45:56.600842 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.601651 kubelet[2900]: E1124 01:45:56.600860 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 01:45:56.603768 kubelet[2900]: E1124 01:45:56.603730 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.603768 kubelet[2900]: W1124 01:45:56.603762 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.603945 kubelet[2900]: E1124 01:45:56.603790 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.615759 containerd[1582]: time="2025-11-24T01:45:56.614898542Z" level=info msg="connecting to shim dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31" address="unix:///run/containerd/s/f1b78de97cbc0f5581d56ab6ca9c6f05b4f832bc9bee786dcd73e3aa9aa2a9e4" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:45:56.656894 systemd[1]: Started cri-containerd-eb6ff040f2a3f29d3248575329669390eb1f1433c387b2224efc02324e61ea20.scope - libcontainer container eb6ff040f2a3f29d3248575329669390eb1f1433c387b2224efc02324e61ea20. Nov 24 01:45:56.668196 kubelet[2900]: E1124 01:45:56.668150 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 01:45:56.669110 kubelet[2900]: W1124 01:45:56.669075 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 01:45:56.671412 kubelet[2900]: E1124 01:45:56.671371 2900 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 01:45:56.707281 systemd[1]: Started cri-containerd-dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31.scope - libcontainer container dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31. Nov 24 01:45:56.859833 containerd[1582]: time="2025-11-24T01:45:56.859751079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7x5k,Uid:ee7b2794-1454-4f11-a2ec-f627b967e1da,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\"" Nov 24 01:45:56.864234 containerd[1582]: time="2025-11-24T01:45:56.864182533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 01:45:57.015825 containerd[1582]: time="2025-11-24T01:45:57.015768910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58cdbb7985-9g9nh,Uid:616f21a0-a1ad-423d-92ad-4cfc8ffc4f86,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb6ff040f2a3f29d3248575329669390eb1f1433c387b2224efc02324e61ea20\"" Nov 24 01:45:57.841381 kubelet[2900]: E1124 01:45:57.841313 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:45:58.484540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976483309.mount: Deactivated successfully. 
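The repeating kubelet errors above come from the dynamic plugin prober: it re-scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and invokes each driver binary with "init", expecting a JSON status object on stdout. The nodeagent~uds/uds executable is not present (the log says "executable file not found in $PATH"), so the call returns no output and the unmarshal fails on every probe. Below is a minimal, illustrative Go sketch of the response shape a FlexVolume driver is expected to print for "init" under the standard FlexVolume call-out convention; the type and field names are illustrative, not taken from this log.

// Illustrative sketch only: the JSON a FlexVolume driver normally prints
// when kubelet calls it with "init". Empty output, as in the log above,
// cannot be unmarshalled and produces "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus mirrors the documented FlexVolume call-out response fields.
type DriverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	resp := DriverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	}
	out, _ := json.Marshal(resp)
	fmt.Println(string(out)) // kubelet parses this line from the driver's stdout
}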
Nov 24 01:45:58.708640 containerd[1582]: time="2025-11-24T01:45:58.707746554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:58.710363 containerd[1582]: time="2025-11-24T01:45:58.710330641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 24 01:45:58.711165 containerd[1582]: time="2025-11-24T01:45:58.711131450Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:58.714748 containerd[1582]: time="2025-11-24T01:45:58.714713525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:45:58.717309 containerd[1582]: time="2025-11-24T01:45:58.716598270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.85232218s" Nov 24 01:45:58.717309 containerd[1582]: time="2025-11-24T01:45:58.716973326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 01:45:58.719342 containerd[1582]: time="2025-11-24T01:45:58.719306433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 01:45:58.724922 containerd[1582]: time="2025-11-24T01:45:58.724873692Z" level=info msg="CreateContainer within sandbox \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 01:45:58.739973 containerd[1582]: time="2025-11-24T01:45:58.739388072Z" level=info msg="Container 729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:45:58.749978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199250612.mount: Deactivated successfully. Nov 24 01:45:58.757270 containerd[1582]: time="2025-11-24T01:45:58.757224018Z" level=info msg="CreateContainer within sandbox \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c\"" Nov 24 01:45:58.760087 containerd[1582]: time="2025-11-24T01:45:58.758354530Z" level=info msg="StartContainer for \"729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c\"" Nov 24 01:45:58.762491 containerd[1582]: time="2025-11-24T01:45:58.762448872Z" level=info msg="connecting to shim 729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c" address="unix:///run/containerd/s/f1b78de97cbc0f5581d56ab6ca9c6f05b4f832bc9bee786dcd73e3aa9aa2a9e4" protocol=ttrpc version=3 Nov 24 01:45:58.810914 systemd[1]: Started cri-containerd-729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c.scope - libcontainer container 729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c. 
Nov 24 01:45:58.935313 containerd[1582]: time="2025-11-24T01:45:58.935266861Z" level=info msg="StartContainer for \"729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c\" returns successfully" Nov 24 01:45:58.961861 systemd[1]: cri-containerd-729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c.scope: Deactivated successfully. Nov 24 01:45:59.010117 containerd[1582]: time="2025-11-24T01:45:59.009933157Z" level=info msg="received container exit event container_id:\"729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c\" id:\"729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c\" pid:3504 exited_at:{seconds:1763948758 nanos:965564169}" Nov 24 01:45:59.408997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-729ea06fc69a25006024d8a81b816d13d009a9188203126753069f409ac4858c-rootfs.mount: Deactivated successfully. Nov 24 01:45:59.841737 kubelet[2900]: E1124 01:45:59.841560 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:01.842274 kubelet[2900]: E1124 01:46:01.842200 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:02.255536 containerd[1582]: time="2025-11-24T01:46:02.255100197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:02.258005 containerd[1582]: time="2025-11-24T01:46:02.257937885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 24 01:46:02.259938 containerd[1582]: time="2025-11-24T01:46:02.259575293Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:02.265146 containerd[1582]: time="2025-11-24T01:46:02.264795764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:02.267786 containerd[1582]: time="2025-11-24T01:46:02.267743367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.547989395s" Nov 24 01:46:02.267909 containerd[1582]: time="2025-11-24T01:46:02.267792924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 01:46:02.271899 containerd[1582]: time="2025-11-24T01:46:02.271657100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 01:46:02.322362 containerd[1582]: time="2025-11-24T01:46:02.322139441Z" level=info msg="CreateContainer within sandbox 
\"eb6ff040f2a3f29d3248575329669390eb1f1433c387b2224efc02324e61ea20\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 01:46:02.336507 containerd[1582]: time="2025-11-24T01:46:02.334979110Z" level=info msg="Container 69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:46:02.341880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706133098.mount: Deactivated successfully. Nov 24 01:46:02.360114 containerd[1582]: time="2025-11-24T01:46:02.360050164Z" level=info msg="CreateContainer within sandbox \"eb6ff040f2a3f29d3248575329669390eb1f1433c387b2224efc02324e61ea20\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da\"" Nov 24 01:46:02.361588 containerd[1582]: time="2025-11-24T01:46:02.361538383Z" level=info msg="StartContainer for \"69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da\"" Nov 24 01:46:02.363742 containerd[1582]: time="2025-11-24T01:46:02.363597263Z" level=info msg="connecting to shim 69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da" address="unix:///run/containerd/s/9799d41e43d78aada6f3a6286ca544f533b1a1691244a4609ec5ae2a0c18b4a8" protocol=ttrpc version=3 Nov 24 01:46:02.409039 systemd[1]: Started cri-containerd-69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da.scope - libcontainer container 69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da. Nov 24 01:46:02.495085 containerd[1582]: time="2025-11-24T01:46:02.494936791Z" level=info msg="StartContainer for \"69114b41604818c5d021c8c2a325b930c964ea070843eff5397e8b043d8c97da\" returns successfully" Nov 24 01:46:03.044466 kubelet[2900]: I1124 01:46:03.044120 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58cdbb7985-9g9nh" podStartSLOduration=1.7918768900000002 podStartE2EDuration="7.043879973s" podCreationTimestamp="2025-11-24 01:45:56 +0000 UTC" firstStartedPulling="2025-11-24 01:45:57.019057332 +0000 UTC m=+25.431845052" lastFinishedPulling="2025-11-24 01:46:02.271060407 +0000 UTC m=+30.683848135" observedRunningTime="2025-11-24 01:46:03.0437811 +0000 UTC m=+31.456568837" watchObservedRunningTime="2025-11-24 01:46:03.043879973 +0000 UTC m=+31.456667702" Nov 24 01:46:03.841194 kubelet[2900]: E1124 01:46:03.840337 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:05.847818 kubelet[2900]: E1124 01:46:05.847767 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:07.342671 containerd[1582]: time="2025-11-24T01:46:07.341540298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:07.356832 containerd[1582]: time="2025-11-24T01:46:07.342564338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 01:46:07.356832 
containerd[1582]: time="2025-11-24T01:46:07.344387469Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:07.357690 containerd[1582]: time="2025-11-24T01:46:07.347443085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.075596646s" Nov 24 01:46:07.357690 containerd[1582]: time="2025-11-24T01:46:07.357573596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 01:46:07.358155 containerd[1582]: time="2025-11-24T01:46:07.358109207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:07.367914 containerd[1582]: time="2025-11-24T01:46:07.367862350Z" level=info msg="CreateContainer within sandbox \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 01:46:07.384652 containerd[1582]: time="2025-11-24T01:46:07.384163961Z" level=info msg="Container 03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:46:07.402299 containerd[1582]: time="2025-11-24T01:46:07.401486498Z" level=info msg="CreateContainer within sandbox \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70\"" Nov 24 01:46:07.403855 containerd[1582]: time="2025-11-24T01:46:07.403735021Z" level=info msg="StartContainer for \"03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70\"" Nov 24 01:46:07.408252 containerd[1582]: time="2025-11-24T01:46:07.408205107Z" level=info msg="connecting to shim 03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70" address="unix:///run/containerd/s/f1b78de97cbc0f5581d56ab6ca9c6f05b4f832bc9bee786dcd73e3aa9aa2a9e4" protocol=ttrpc version=3 Nov 24 01:46:07.446159 systemd[1]: Started cri-containerd-03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70.scope - libcontainer container 03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70. Nov 24 01:46:07.611690 containerd[1582]: time="2025-11-24T01:46:07.610466393Z" level=info msg="StartContainer for \"03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70\" returns successfully" Nov 24 01:46:07.840279 kubelet[2900]: E1124 01:46:07.840082 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:08.608086 systemd[1]: cri-containerd-03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70.scope: Deactivated successfully. 
Nov 24 01:46:08.610036 systemd[1]: cri-containerd-03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70.scope: Consumed 733ms CPU time, 163.5M memory peak, 8.9M read from disk, 171.3M written to disk. Nov 24 01:46:08.678798 kubelet[2900]: I1124 01:46:08.678527 2900 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 01:46:08.728875 containerd[1582]: time="2025-11-24T01:46:08.728810671Z" level=info msg="received container exit event container_id:\"03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70\" id:\"03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70\" pid:3603 exited_at:{seconds:1763948768 nanos:719346884}" Nov 24 01:46:08.774905 systemd[1]: Created slice kubepods-burstable-pode566338c_ba77_4549_8033_3f56c99af55d.slice - libcontainer container kubepods-burstable-pode566338c_ba77_4549_8033_3f56c99af55d.slice. Nov 24 01:46:08.781415 kubelet[2900]: I1124 01:46:08.780630 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e566338c-ba77-4549-8033-3f56c99af55d-config-volume\") pod \"coredns-674b8bbfcf-ns8j8\" (UID: \"e566338c-ba77-4549-8033-3f56c99af55d\") " pod="kube-system/coredns-674b8bbfcf-ns8j8" Nov 24 01:46:08.781415 kubelet[2900]: I1124 01:46:08.780699 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxq6\" (UniqueName: \"kubernetes.io/projected/e566338c-ba77-4549-8033-3f56c99af55d-kube-api-access-vcxq6\") pod \"coredns-674b8bbfcf-ns8j8\" (UID: \"e566338c-ba77-4549-8033-3f56c99af55d\") " pod="kube-system/coredns-674b8bbfcf-ns8j8" Nov 24 01:46:08.809022 systemd[1]: Created slice kubepods-besteffort-podbda40ac0_fe5e_4a6c_924c_3a5c697eb0d1.slice - libcontainer container kubepods-besteffort-podbda40ac0_fe5e_4a6c_924c_3a5c697eb0d1.slice. Nov 24 01:46:08.843556 systemd[1]: Created slice kubepods-besteffort-pod621aa00e_6d25_484a_b356_0b520628e4b2.slice - libcontainer container kubepods-besteffort-pod621aa00e_6d25_484a_b356_0b520628e4b2.slice. Nov 24 01:46:08.864523 systemd[1]: Created slice kubepods-besteffort-podaf4627b6_c7f1_489e_8935_b4a50923c295.slice - libcontainer container kubepods-besteffort-podaf4627b6_c7f1_489e_8935_b4a50923c295.slice. 
Nov 24 01:46:08.885778 kubelet[2900]: I1124 01:46:08.884600 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5debaa4f-5f0a-45c9-bc91-84f4de6609a5-calico-apiserver-certs\") pod \"calico-apiserver-687d5b8b8f-7nccl\" (UID: \"5debaa4f-5f0a-45c9-bc91-84f4de6609a5\") " pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" Nov 24 01:46:08.885778 kubelet[2900]: I1124 01:46:08.885577 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvbw\" (UniqueName: \"kubernetes.io/projected/5debaa4f-5f0a-45c9-bc91-84f4de6609a5-kube-api-access-9vvbw\") pod \"calico-apiserver-687d5b8b8f-7nccl\" (UID: \"5debaa4f-5f0a-45c9-bc91-84f4de6609a5\") " pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" Nov 24 01:46:08.885778 kubelet[2900]: I1124 01:46:08.885669 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1-config\") pod \"goldmane-666569f655-k9g82\" (UID: \"bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1\") " pod="calico-system/goldmane-666569f655-k9g82" Nov 24 01:46:08.885778 kubelet[2900]: I1124 01:46:08.885726 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhpzf\" (UniqueName: \"kubernetes.io/projected/bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1-kube-api-access-zhpzf\") pod \"goldmane-666569f655-k9g82\" (UID: \"bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1\") " pod="calico-system/goldmane-666569f655-k9g82" Nov 24 01:46:08.889200 kubelet[2900]: I1124 01:46:08.885753 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8mnd\" (UniqueName: \"kubernetes.io/projected/621aa00e-6d25-484a-b356-0b520628e4b2-kube-api-access-x8mnd\") pod \"calico-kube-controllers-6f5d79d7cd-lvqjg\" (UID: \"621aa00e-6d25-484a-b356-0b520628e4b2\") " pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" Nov 24 01:46:08.889200 kubelet[2900]: I1124 01:46:08.888756 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jn5v\" (UniqueName: \"kubernetes.io/projected/dabb5be9-db72-477b-91d5-84b55db3018a-kube-api-access-4jn5v\") pod \"whisker-87b79d49b-4kkv5\" (UID: \"dabb5be9-db72-477b-91d5-84b55db3018a\") " pod="calico-system/whisker-87b79d49b-4kkv5" Nov 24 01:46:08.889959 kubelet[2900]: I1124 01:46:08.889387 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/621aa00e-6d25-484a-b356-0b520628e4b2-tigera-ca-bundle\") pod \"calico-kube-controllers-6f5d79d7cd-lvqjg\" (UID: \"621aa00e-6d25-484a-b356-0b520628e4b2\") " pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" Nov 24 01:46:08.889959 kubelet[2900]: I1124 01:46:08.889795 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-backend-key-pair\") pod \"whisker-87b79d49b-4kkv5\" (UID: \"dabb5be9-db72-477b-91d5-84b55db3018a\") " pod="calico-system/whisker-87b79d49b-4kkv5" Nov 24 01:46:08.890154 kubelet[2900]: I1124 01:46:08.890118 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-ca-bundle\") pod \"whisker-87b79d49b-4kkv5\" (UID: \"dabb5be9-db72-477b-91d5-84b55db3018a\") " pod="calico-system/whisker-87b79d49b-4kkv5" Nov 24 01:46:08.890391 kubelet[2900]: I1124 01:46:08.890284 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pn4r\" (UniqueName: \"kubernetes.io/projected/cff573f7-4f9e-4dab-b705-de47197417fc-kube-api-access-9pn4r\") pod \"coredns-674b8bbfcf-rn29s\" (UID: \"cff573f7-4f9e-4dab-b705-de47197417fc\") " pod="kube-system/coredns-674b8bbfcf-rn29s" Nov 24 01:46:08.890800 kubelet[2900]: I1124 01:46:08.890607 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff573f7-4f9e-4dab-b705-de47197417fc-config-volume\") pod \"coredns-674b8bbfcf-rn29s\" (UID: \"cff573f7-4f9e-4dab-b705-de47197417fc\") " pod="kube-system/coredns-674b8bbfcf-rn29s" Nov 24 01:46:08.893424 kubelet[2900]: I1124 01:46:08.893268 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af4627b6-c7f1-489e-8935-b4a50923c295-calico-apiserver-certs\") pod \"calico-apiserver-687d5b8b8f-jvhv5\" (UID: \"af4627b6-c7f1-489e-8935-b4a50923c295\") " pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" Nov 24 01:46:08.893424 kubelet[2900]: I1124 01:46:08.893380 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1-goldmane-ca-bundle\") pod \"goldmane-666569f655-k9g82\" (UID: \"bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1\") " pod="calico-system/goldmane-666569f655-k9g82" Nov 24 01:46:08.894027 kubelet[2900]: I1124 01:46:08.893672 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1-goldmane-key-pair\") pod \"goldmane-666569f655-k9g82\" (UID: \"bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1\") " pod="calico-system/goldmane-666569f655-k9g82" Nov 24 01:46:08.897035 kubelet[2900]: I1124 01:46:08.896746 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c897p\" (UniqueName: \"kubernetes.io/projected/af4627b6-c7f1-489e-8935-b4a50923c295-kube-api-access-c897p\") pod \"calico-apiserver-687d5b8b8f-jvhv5\" (UID: \"af4627b6-c7f1-489e-8935-b4a50923c295\") " pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" Nov 24 01:46:08.897725 systemd[1]: Created slice kubepods-burstable-podcff573f7_4f9e_4dab_b705_de47197417fc.slice - libcontainer container kubepods-burstable-podcff573f7_4f9e_4dab_b705_de47197417fc.slice. Nov 24 01:46:08.937946 systemd[1]: Created slice kubepods-besteffort-poddabb5be9_db72_477b_91d5_84b55db3018a.slice - libcontainer container kubepods-besteffort-poddabb5be9_db72_477b_91d5_84b55db3018a.slice. Nov 24 01:46:08.953527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03e3036fb05d1cd9fe326cbc1104ad1a36981e42fee53af493be6c85dd299e70-rootfs.mount: Deactivated successfully. Nov 24 01:46:08.966100 systemd[1]: Created slice kubepods-besteffort-pod5debaa4f_5f0a_45c9_bc91_84f4de6609a5.slice - libcontainer container kubepods-besteffort-pod5debaa4f_5f0a_45c9_bc91_84f4de6609a5.slice. 
Nov 24 01:46:09.090992 containerd[1582]: time="2025-11-24T01:46:09.090943679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 01:46:09.095626 containerd[1582]: time="2025-11-24T01:46:09.095573846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ns8j8,Uid:e566338c-ba77-4549-8033-3f56c99af55d,Namespace:kube-system,Attempt:0,}" Nov 24 01:46:09.119168 containerd[1582]: time="2025-11-24T01:46:09.118788993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9g82,Uid:bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:09.156926 containerd[1582]: time="2025-11-24T01:46:09.156463181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5d79d7cd-lvqjg,Uid:621aa00e-6d25-484a-b356-0b520628e4b2,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:09.199605 containerd[1582]: time="2025-11-24T01:46:09.199539898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-jvhv5,Uid:af4627b6-c7f1-489e-8935-b4a50923c295,Namespace:calico-apiserver,Attempt:0,}" Nov 24 01:46:09.218656 containerd[1582]: time="2025-11-24T01:46:09.218241541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rn29s,Uid:cff573f7-4f9e-4dab-b705-de47197417fc,Namespace:kube-system,Attempt:0,}" Nov 24 01:46:09.276768 containerd[1582]: time="2025-11-24T01:46:09.276716060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87b79d49b-4kkv5,Uid:dabb5be9-db72-477b-91d5-84b55db3018a,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:09.307901 containerd[1582]: time="2025-11-24T01:46:09.307850401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-7nccl,Uid:5debaa4f-5f0a-45c9-bc91-84f4de6609a5,Namespace:calico-apiserver,Attempt:0,}" Nov 24 01:46:09.528437 containerd[1582]: time="2025-11-24T01:46:09.528366820Z" level=error msg="Failed to destroy network for sandbox \"ae1a67965dccc79e4adfffd90422a0de26479d4c8893d2d275dc4706fbb7e7d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.530896 containerd[1582]: time="2025-11-24T01:46:09.530768057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rn29s,Uid:cff573f7-4f9e-4dab-b705-de47197417fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1a67965dccc79e4adfffd90422a0de26479d4c8893d2d275dc4706fbb7e7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.531673 kubelet[2900]: E1124 01:46:09.531529 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1a67965dccc79e4adfffd90422a0de26479d4c8893d2d275dc4706fbb7e7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.532322 kubelet[2900]: E1124 01:46:09.531935 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1a67965dccc79e4adfffd90422a0de26479d4c8893d2d275dc4706fbb7e7d7\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rn29s" Nov 24 01:46:09.532322 kubelet[2900]: E1124 01:46:09.531985 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1a67965dccc79e4adfffd90422a0de26479d4c8893d2d275dc4706fbb7e7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rn29s" Nov 24 01:46:09.532322 kubelet[2900]: E1124 01:46:09.532073 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rn29s_kube-system(cff573f7-4f9e-4dab-b705-de47197417fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rn29s_kube-system(cff573f7-4f9e-4dab-b705-de47197417fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae1a67965dccc79e4adfffd90422a0de26479d4c8893d2d275dc4706fbb7e7d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rn29s" podUID="cff573f7-4f9e-4dab-b705-de47197417fc" Nov 24 01:46:09.537954 containerd[1582]: time="2025-11-24T01:46:09.536706089Z" level=error msg="Failed to destroy network for sandbox \"1716a360874331cb87622159e736e4e2a7181253e3d7e807f82a210f0d3e071e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.539699 containerd[1582]: time="2025-11-24T01:46:09.539650297Z" level=error msg="Failed to destroy network for sandbox \"191649b2c0d961eb4459f20df5e510e96dc3b9abedc63634a7bfb449fc32157f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.542429 containerd[1582]: time="2025-11-24T01:46:09.542346340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9g82,Uid:bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1716a360874331cb87622159e736e4e2a7181253e3d7e807f82a210f0d3e071e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.549025 containerd[1582]: time="2025-11-24T01:46:09.548714358Z" level=error msg="Failed to destroy network for sandbox \"fff0227320d6a341973b9d0b0d729fdb3be311626715b0f3f71968b9cc3fdaaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.549357 kubelet[2900]: E1124 01:46:09.549303 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1716a360874331cb87622159e736e4e2a7181253e3d7e807f82a210f0d3e071e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.549454 kubelet[2900]: E1124 01:46:09.549392 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1716a360874331cb87622159e736e4e2a7181253e3d7e807f82a210f0d3e071e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k9g82" Nov 24 01:46:09.549454 kubelet[2900]: E1124 01:46:09.549423 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1716a360874331cb87622159e736e4e2a7181253e3d7e807f82a210f0d3e071e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k9g82" Nov 24 01:46:09.549610 kubelet[2900]: E1124 01:46:09.549508 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-k9g82_calico-system(bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-k9g82_calico-system(bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1716a360874331cb87622159e736e4e2a7181253e3d7e807f82a210f0d3e071e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:46:09.552326 containerd[1582]: time="2025-11-24T01:46:09.551879057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-jvhv5,Uid:af4627b6-c7f1-489e-8935-b4a50923c295,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"191649b2c0d961eb4459f20df5e510e96dc3b9abedc63634a7bfb449fc32157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.554669 kubelet[2900]: E1124 01:46:09.554125 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"191649b2c0d961eb4459f20df5e510e96dc3b9abedc63634a7bfb449fc32157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.554669 kubelet[2900]: E1124 01:46:09.554218 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"191649b2c0d961eb4459f20df5e510e96dc3b9abedc63634a7bfb449fc32157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" Nov 24 01:46:09.554669 kubelet[2900]: E1124 01:46:09.554259 2900 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"191649b2c0d961eb4459f20df5e510e96dc3b9abedc63634a7bfb449fc32157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" Nov 24 01:46:09.554885 kubelet[2900]: E1124 01:46:09.554322 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-687d5b8b8f-jvhv5_calico-apiserver(af4627b6-c7f1-489e-8935-b4a50923c295)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-687d5b8b8f-jvhv5_calico-apiserver(af4627b6-c7f1-489e-8935-b4a50923c295)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"191649b2c0d961eb4459f20df5e510e96dc3b9abedc63634a7bfb449fc32157f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:46:09.556899 containerd[1582]: time="2025-11-24T01:46:09.556824117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5d79d7cd-lvqjg,Uid:621aa00e-6d25-484a-b356-0b520628e4b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff0227320d6a341973b9d0b0d729fdb3be311626715b0f3f71968b9cc3fdaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.557572 containerd[1582]: time="2025-11-24T01:46:09.557080782Z" level=error msg="Failed to destroy network for sandbox \"441eba8dace723c55091d3e50af1110d91d36763f88d98fad6bccbcc4bbe698a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.559066 kubelet[2900]: E1124 01:46:09.557270 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff0227320d6a341973b9d0b0d729fdb3be311626715b0f3f71968b9cc3fdaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.559066 kubelet[2900]: E1124 01:46:09.557809 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff0227320d6a341973b9d0b0d729fdb3be311626715b0f3f71968b9cc3fdaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" Nov 24 01:46:09.559066 kubelet[2900]: E1124 01:46:09.557864 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff0227320d6a341973b9d0b0d729fdb3be311626715b0f3f71968b9cc3fdaaa\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" Nov 24 01:46:09.559664 kubelet[2900]: E1124 01:46:09.557941 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fff0227320d6a341973b9d0b0d729fdb3be311626715b0f3f71968b9cc3fdaaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:46:09.559664 kubelet[2900]: E1124 01:46:09.559351 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"441eba8dace723c55091d3e50af1110d91d36763f88d98fad6bccbcc4bbe698a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.559664 kubelet[2900]: E1124 01:46:09.559404 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"441eba8dace723c55091d3e50af1110d91d36763f88d98fad6bccbcc4bbe698a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ns8j8" Nov 24 01:46:09.559906 containerd[1582]: time="2025-11-24T01:46:09.559101505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ns8j8,Uid:e566338c-ba77-4549-8033-3f56c99af55d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"441eba8dace723c55091d3e50af1110d91d36763f88d98fad6bccbcc4bbe698a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.559994 kubelet[2900]: E1124 01:46:09.559447 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"441eba8dace723c55091d3e50af1110d91d36763f88d98fad6bccbcc4bbe698a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ns8j8" Nov 24 01:46:09.559994 kubelet[2900]: E1124 01:46:09.559500 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ns8j8_kube-system(e566338c-ba77-4549-8033-3f56c99af55d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ns8j8_kube-system(e566338c-ba77-4549-8033-3f56c99af55d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"441eba8dace723c55091d3e50af1110d91d36763f88d98fad6bccbcc4bbe698a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ns8j8" podUID="e566338c-ba77-4549-8033-3f56c99af55d" Nov 24 01:46:09.580486 containerd[1582]: time="2025-11-24T01:46:09.580404409Z" level=error msg="Failed to destroy network for sandbox \"3eb9a2400b0c2bc789632dc96d7ebbf9a571b4f6ba4e2c2b46e46bfd402113cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.583017 containerd[1582]: time="2025-11-24T01:46:09.582740997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87b79d49b-4kkv5,Uid:dabb5be9-db72-477b-91d5-84b55db3018a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb9a2400b0c2bc789632dc96d7ebbf9a571b4f6ba4e2c2b46e46bfd402113cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.586339 kubelet[2900]: E1124 01:46:09.583017 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb9a2400b0c2bc789632dc96d7ebbf9a571b4f6ba4e2c2b46e46bfd402113cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.586339 kubelet[2900]: E1124 01:46:09.583082 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb9a2400b0c2bc789632dc96d7ebbf9a571b4f6ba4e2c2b46e46bfd402113cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-87b79d49b-4kkv5" Nov 24 01:46:09.586339 kubelet[2900]: E1124 01:46:09.583114 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eb9a2400b0c2bc789632dc96d7ebbf9a571b4f6ba4e2c2b46e46bfd402113cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-87b79d49b-4kkv5" Nov 24 01:46:09.586591 kubelet[2900]: E1124 01:46:09.583250 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-87b79d49b-4kkv5_calico-system(dabb5be9-db72-477b-91d5-84b55db3018a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-87b79d49b-4kkv5_calico-system(dabb5be9-db72-477b-91d5-84b55db3018a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3eb9a2400b0c2bc789632dc96d7ebbf9a571b4f6ba4e2c2b46e46bfd402113cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-87b79d49b-4kkv5" podUID="dabb5be9-db72-477b-91d5-84b55db3018a" Nov 24 
01:46:09.600897 containerd[1582]: time="2025-11-24T01:46:09.600833009Z" level=error msg="Failed to destroy network for sandbox \"d9a0d8709dd9aa0d1f7de2d0fc01154d4093fe25d4ac003aa42e75787a0e07da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.603852 containerd[1582]: time="2025-11-24T01:46:09.603780906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-7nccl,Uid:5debaa4f-5f0a-45c9-bc91-84f4de6609a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a0d8709dd9aa0d1f7de2d0fc01154d4093fe25d4ac003aa42e75787a0e07da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.604221 kubelet[2900]: E1124 01:46:09.604155 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a0d8709dd9aa0d1f7de2d0fc01154d4093fe25d4ac003aa42e75787a0e07da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:09.604302 kubelet[2900]: E1124 01:46:09.604250 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a0d8709dd9aa0d1f7de2d0fc01154d4093fe25d4ac003aa42e75787a0e07da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" Nov 24 01:46:09.604302 kubelet[2900]: E1124 01:46:09.604281 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a0d8709dd9aa0d1f7de2d0fc01154d4093fe25d4ac003aa42e75787a0e07da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" Nov 24 01:46:09.604411 kubelet[2900]: E1124 01:46:09.604360 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-687d5b8b8f-7nccl_calico-apiserver(5debaa4f-5f0a-45c9-bc91-84f4de6609a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-687d5b8b8f-7nccl_calico-apiserver(5debaa4f-5f0a-45c9-bc91-84f4de6609a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9a0d8709dd9aa0d1f7de2d0fc01154d4093fe25d4ac003aa42e75787a0e07da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:46:09.858968 systemd[1]: Created slice kubepods-besteffort-podd4c21c8f_271a_4e0d_ab8d_b3169fe61687.slice - libcontainer container kubepods-besteffort-podd4c21c8f_271a_4e0d_ab8d_b3169fe61687.slice. 
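Every RunPodSandbox attempt above fails with the same root cause: the Calico CNI plugin refuses to add (or delete) pod networking until /var/lib/calico/nodename exists, and that file only appears once the calico/node container is up and has the host's /var/lib/calico/ mounted. A minimal sketch of such a readiness gate, assuming nothing beyond what the error text states (the function name is invented for illustration, and this is not the actual plugin code):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameReady mimics the gate visible in the errors above: pod networking is
// refused until /var/lib/calico/nodename can be read, because calico/node
// writes that file after it starts and mounts /var/lib/calico/ from the host.
func nodenameReady(path string) (string, error) {
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	if name, err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println("CNI add/delete would fail:", err)
	} else {
		fmt.Println("CNI add/delete would proceed for node", name)
	}
}
```

Until the file appears, kubelet keeps logging CreatePodSandboxError and retrying, which is the pattern visible in the entries between 01:46:09 and 01:46:20.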
Nov 24 01:46:09.870455 containerd[1582]: time="2025-11-24T01:46:09.866994713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc98b,Uid:d4c21c8f-271a-4e0d-ab8d-b3169fe61687,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:09.997059 containerd[1582]: time="2025-11-24T01:46:09.996966337Z" level=error msg="Failed to destroy network for sandbox \"1754b951528b439c703c02ea6b52a3f15ac95adf285f1a4cefdbab25310e9f62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:10.001638 systemd[1]: run-netns-cni\x2de060e871\x2dd2bd\x2d0633\x2d7e0e\x2d9704d42531c1.mount: Deactivated successfully. Nov 24 01:46:10.002680 containerd[1582]: time="2025-11-24T01:46:10.002293868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc98b,Uid:d4c21c8f-271a-4e0d-ab8d-b3169fe61687,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1754b951528b439c703c02ea6b52a3f15ac95adf285f1a4cefdbab25310e9f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:10.003082 kubelet[2900]: E1124 01:46:10.003010 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1754b951528b439c703c02ea6b52a3f15ac95adf285f1a4cefdbab25310e9f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:10.004312 kubelet[2900]: E1124 01:46:10.003463 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1754b951528b439c703c02ea6b52a3f15ac95adf285f1a4cefdbab25310e9f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dc98b" Nov 24 01:46:10.004312 kubelet[2900]: E1124 01:46:10.003541 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1754b951528b439c703c02ea6b52a3f15ac95adf285f1a4cefdbab25310e9f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dc98b" Nov 24 01:46:10.004312 kubelet[2900]: E1124 01:46:10.004018 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1754b951528b439c703c02ea6b52a3f15ac95adf285f1a4cefdbab25310e9f62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" 
Nov 24 01:46:19.428971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457279441.mount: Deactivated successfully. Nov 24 01:46:19.563846 containerd[1582]: time="2025-11-24T01:46:19.519276423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 01:46:19.610048 containerd[1582]: time="2025-11-24T01:46:19.609821724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:19.679288 containerd[1582]: time="2025-11-24T01:46:19.679167668Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:19.681821 containerd[1582]: time="2025-11-24T01:46:19.680826057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 01:46:19.683305 containerd[1582]: time="2025-11-24T01:46:19.683237527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.589256181s" Nov 24 01:46:19.683305 containerd[1582]: time="2025-11-24T01:46:19.683306348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 01:46:19.800712 containerd[1582]: time="2025-11-24T01:46:19.800661529Z" level=info msg="CreateContainer within sandbox \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 01:46:19.874244 containerd[1582]: time="2025-11-24T01:46:19.874181623Z" level=info msg="Container e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:46:19.880919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998607618.mount: Deactivated successfully. 
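As a rough sanity check on the pull that just completed: the log quotes 156883675 bytes read for ghcr.io/flatcar/calico/node:v3.30.4 over 10.589256181s, i.e. roughly 14 MiB/s. A throwaway Go snippet doing that arithmetic with the figures quoted above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the pull log above for ghcr.io/flatcar/calico/node:v3.30.4.
	const bytesRead = 156883675
	elapsed := 10589256181 * time.Nanosecond // 10.589256181s

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
	// ~149.6 MiB in 10.589256181s, roughly 14.1 MiB/s
}
```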
Nov 24 01:46:19.889379 containerd[1582]: time="2025-11-24T01:46:19.889310858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5d79d7cd-lvqjg,Uid:621aa00e-6d25-484a-b356-0b520628e4b2,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:19.935921 containerd[1582]: time="2025-11-24T01:46:19.935592760Z" level=info msg="CreateContainer within sandbox \"dc263f4a51d670c022b564f4e0aa05e60d32e8adb1afd3053ccac67abd67fd31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197\"" Nov 24 01:46:19.942001 containerd[1582]: time="2025-11-24T01:46:19.941822007Z" level=info msg="StartContainer for \"e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197\"" Nov 24 01:46:19.963487 containerd[1582]: time="2025-11-24T01:46:19.963422088Z" level=info msg="connecting to shim e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197" address="unix:///run/containerd/s/f1b78de97cbc0f5581d56ab6ca9c6f05b4f832bc9bee786dcd73e3aa9aa2a9e4" protocol=ttrpc version=3 Nov 24 01:46:20.032690 containerd[1582]: time="2025-11-24T01:46:20.032154683Z" level=error msg="Failed to destroy network for sandbox \"2a89986e5e202af9ff16ade535071904e181570f724b52a0a6e91fbfbfa2aa93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:20.034206 containerd[1582]: time="2025-11-24T01:46:20.034148497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5d79d7cd-lvqjg,Uid:621aa00e-6d25-484a-b356-0b520628e4b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a89986e5e202af9ff16ade535071904e181570f724b52a0a6e91fbfbfa2aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:20.043693 kubelet[2900]: E1124 01:46:20.042101 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a89986e5e202af9ff16ade535071904e181570f724b52a0a6e91fbfbfa2aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 01:46:20.043693 kubelet[2900]: E1124 01:46:20.042228 2900 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a89986e5e202af9ff16ade535071904e181570f724b52a0a6e91fbfbfa2aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" Nov 24 01:46:20.043693 kubelet[2900]: E1124 01:46:20.042261 2900 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a89986e5e202af9ff16ade535071904e181570f724b52a0a6e91fbfbfa2aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" Nov 24 01:46:20.045219 
kubelet[2900]: E1124 01:46:20.042361 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a89986e5e202af9ff16ade535071904e181570f724b52a0a6e91fbfbfa2aa93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:46:20.155043 systemd[1]: Started cri-containerd-e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197.scope - libcontainer container e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197. Nov 24 01:46:20.274162 containerd[1582]: time="2025-11-24T01:46:20.273048459Z" level=info msg="StartContainer for \"e1874679f2b56e93b4bd672d3c86907781317563c9fe09ca84d05eafc93a4197\" returns successfully" Nov 24 01:46:20.619038 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 01:46:20.622109 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 24 01:46:20.842578 containerd[1582]: time="2025-11-24T01:46:20.842523730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87b79d49b-4kkv5,Uid:dabb5be9-db72-477b-91d5-84b55db3018a,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:20.844727 containerd[1582]: time="2025-11-24T01:46:20.843087139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ns8j8,Uid:e566338c-ba77-4549-8033-3f56c99af55d,Namespace:kube-system,Attempt:0,}" Nov 24 01:46:21.782532 systemd-networkd[1484]: calie7980131c1e: Link UP Nov 24 01:46:21.786095 systemd-networkd[1484]: calie7980131c1e: Gained carrier Nov 24 01:46:21.831744 containerd[1582]: 2025-11-24 01:46:21.009 [INFO][3951] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:21.831744 containerd[1582]: 2025-11-24 01:46:21.091 [INFO][3951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0 coredns-674b8bbfcf- kube-system e566338c-ba77-4549-8033-3f56c99af55d 854 0 2025-11-24 01:45:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com coredns-674b8bbfcf-ns8j8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie7980131c1e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-" Nov 24 01:46:21.831744 containerd[1582]: 2025-11-24 01:46:21.091 [INFO][3951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 
01:46:21.831744 containerd[1582]: 2025-11-24 01:46:21.583 [INFO][3973] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" HandleID="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Workload="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.584 [INFO][3973] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" HandleID="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Workload="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001039d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-ns8j8", "timestamp":"2025-11-24 01:46:21.583407015 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.584 [INFO][3973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.585 [INFO][3973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.586 [INFO][3973] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.615 [INFO][3973] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.644 [INFO][3973] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.664 [INFO][3973] ipam/ipam.go 543: Ran out of existing affine blocks for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.668 [INFO][3973] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.676 [INFO][3973] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.15.64/26 Nov 24 01:46:21.833274 containerd[1582]: 2025-11-24 01:46:21.676 [INFO][3973] ipam/ipam.go 572: Found unclaimed block host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.676 [INFO][3973] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.683 [INFO][3973] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.683 [INFO][3973] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.688 [INFO][3973] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.695 [INFO][3973] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.698 [INFO][3973] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.698 [INFO][3973] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.710 [INFO][3973] ipam/ipam_block_reader_writer.go 267: Successfully created block Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.710 [INFO][3973] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.719 [INFO][3973] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.719 [INFO][3973] ipam/ipam.go 607: Block '192.168.15.64/26' has 64 free ips which is more than 1 ips required. 
host="srv-7vvyr.gb1.brightbox.com" subnet=192.168.15.64/26 Nov 24 01:46:21.836935 containerd[1582]: 2025-11-24 01:46:21.719 [INFO][3973] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.838975 containerd[1582]: 2025-11-24 01:46:21.722 [INFO][3973] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482 Nov 24 01:46:21.838975 containerd[1582]: 2025-11-24 01:46:21.734 [INFO][3973] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.838975 containerd[1582]: 2025-11-24 01:46:21.744 [INFO][3973] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.64/26] block=192.168.15.64/26 handle="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.838975 containerd[1582]: 2025-11-24 01:46:21.744 [INFO][3973] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.64/26] handle="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:21.838975 containerd[1582]: 2025-11-24 01:46:21.744 [INFO][3973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:21.838975 containerd[1582]: 2025-11-24 01:46:21.744 [INFO][3973] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.64/26] IPv6=[] ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" HandleID="k8s-pod-network.490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Workload="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 01:46:21.839296 containerd[1582]: 2025-11-24 01:46:21.751 [INFO][3951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e566338c-ba77-4549-8033-3f56c99af55d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-ns8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7980131c1e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:21.839296 containerd[1582]: 2025-11-24 01:46:21.751 [INFO][3951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.64/32] ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 01:46:21.839296 containerd[1582]: 2025-11-24 01:46:21.751 [INFO][3951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7980131c1e ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 01:46:21.839296 containerd[1582]: 2025-11-24 01:46:21.788 [INFO][3951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 01:46:21.839296 containerd[1582]: 2025-11-24 01:46:21.789 [INFO][3951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e566338c-ba77-4549-8033-3f56c99af55d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482", Pod:"coredns-674b8bbfcf-ns8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7980131c1e", MAC:"7a:bf:2b:69:a8:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:21.839296 containerd[1582]: 2025-11-24 01:46:21.811 [INFO][3951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" Namespace="kube-system" Pod="coredns-674b8bbfcf-ns8j8" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--ns8j8-eth0" Nov 24 01:46:21.881570 kubelet[2900]: I1124 01:46:21.808921 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n7x5k" podStartSLOduration=2.948713452 podStartE2EDuration="25.808646395s" podCreationTimestamp="2025-11-24 01:45:56 +0000 UTC" firstStartedPulling="2025-11-24 01:45:56.863701516 +0000 UTC m=+25.276489236" lastFinishedPulling="2025-11-24 01:46:19.723634453 +0000 UTC m=+48.136422179" observedRunningTime="2025-11-24 01:46:21.317436259 +0000 UTC m=+49.730223991" watchObservedRunningTime="2025-11-24 01:46:21.808646395 +0000 UTC m=+50.221434128" Nov 24 01:46:21.883955 containerd[1582]: time="2025-11-24T01:46:21.883899849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9g82,Uid:bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:21.900289 containerd[1582]: time="2025-11-24T01:46:21.900227238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc98b,Uid:d4c21c8f-271a-4e0d-ab8d-b3169fe61687,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:21.991217 systemd-networkd[1484]: cali2d7b0705935: Link UP Nov 24 01:46:22.004794 systemd-networkd[1484]: cali2d7b0705935: Gained carrier Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.012 [INFO][3947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.084 [INFO][3947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0 whisker-87b79d49b- calico-system dabb5be9-db72-477b-91d5-84b55db3018a 913 0 2025-11-24 01:46:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:87b79d49b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com whisker-87b79d49b-4kkv5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d7b0705935 [] [] }} ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.085 [INFO][3947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.583 [INFO][3970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.585 [INFO][3970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103f10), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"whisker-87b79d49b-4kkv5", "timestamp":"2025-11-24 01:46:21.583408173 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.585 [INFO][3970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.744 [INFO][3970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.745 [INFO][3970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.761 [INFO][3970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.812 [INFO][3970] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.844 [INFO][3970] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.852 [INFO][3970] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.858 [INFO][3970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.859 [INFO][3970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.868 [INFO][3970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7 Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.883 [INFO][3970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.919 [INFO][3970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.65/26] block=192.168.15.64/26 
handle="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.920 [INFO][3970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.65/26] handle="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.920 [INFO][3970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:22.066649 containerd[1582]: 2025-11-24 01:46:21.920 [INFO][3970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.65/26] IPv6=[] ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.067640 containerd[1582]: 2025-11-24 01:46:21.950 [INFO][3947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0", GenerateName:"whisker-87b79d49b-", Namespace:"calico-system", SelfLink:"", UID:"dabb5be9-db72-477b-91d5-84b55db3018a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"87b79d49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"whisker-87b79d49b-4kkv5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d7b0705935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:22.067640 containerd[1582]: 2025-11-24 01:46:21.951 [INFO][3947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.65/32] ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.067640 containerd[1582]: 2025-11-24 01:46:21.951 [INFO][3947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d7b0705935 ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.067640 containerd[1582]: 2025-11-24 01:46:22.011 [INFO][3947] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.067640 containerd[1582]: 2025-11-24 01:46:22.013 [INFO][3947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0", GenerateName:"whisker-87b79d49b-", Namespace:"calico-system", SelfLink:"", UID:"dabb5be9-db72-477b-91d5-84b55db3018a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"87b79d49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7", Pod:"whisker-87b79d49b-4kkv5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d7b0705935", MAC:"1e:78:d8:78:13:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:22.067640 containerd[1582]: 2025-11-24 01:46:22.055 [INFO][3947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Namespace="calico-system" Pod="whisker-87b79d49b-4kkv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:22.409120 containerd[1582]: time="2025-11-24T01:46:22.409060649Z" level=info msg="connecting to shim ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" address="unix:///run/containerd/s/1ed3e49dc3d061498a873e0eeb73ab064b1d101ba40935151d3d4522f7b0d8d6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:22.426232 containerd[1582]: time="2025-11-24T01:46:22.425488406Z" level=info msg="connecting to shim 490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482" address="unix:///run/containerd/s/3a7143091bd1574b37de3c77652e782c8ba7c4ba1743b949ad3d0a3e5974855a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:22.570142 systemd[1]: Started cri-containerd-ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7.scope - libcontainer container ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7. 
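The ipam/ipam.go entries above record the first allocations on this node: the plugin takes the host-wide IPAM lock, finds no existing affine block, claims 192.168.15.64/26 (64 addresses), and hands out addresses from it (.64 for coredns-674b8bbfcf-ns8j8, .65 for whisker-87b79d49b-4kkv5, and .67 for csi-node-driver-dc98b shortly after). A toy Go sketch of per-block assignment under those assumptions; the real allocator also persists handles and host affinity in the datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a block and returns the first address not yet handed out, a
// toy version of the per-block assignment the ipam log lines above record.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.15.64/26") // block claimed in the log; 64 addresses
	used := map[netip.Addr]bool{}

	for _, pod := range []string{"coredns-674b8bbfcf-ns8j8", "whisker-87b79d49b-4kkv5", "csi-node-driver-dc98b"} {
		if addr, ok := nextFree(block, used); ok {
			fmt.Printf("%s -> %s/26\n", pod, addr)
		}
	}
	// The journal shows .64, .65 and .67 being claimed; the gap at .66
	// presumably went to a pod whose allocation is not quoted in this excerpt.
}
```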
Nov 24 01:46:22.606850 systemd[1]: Started cri-containerd-490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482.scope - libcontainer container 490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482. Nov 24 01:46:22.674209 systemd-networkd[1484]: cali4e83083da5a: Link UP Nov 24 01:46:22.685399 systemd-networkd[1484]: cali4e83083da5a: Gained carrier Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.136 [INFO][4030] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.169 [INFO][4030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0 csi-node-driver- calico-system d4c21c8f-271a-4e0d-ab8d-b3169fe61687 743 0 2025-11-24 01:45:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com csi-node-driver-dc98b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e83083da5a [] [] }} ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.171 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.504 [INFO][4054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" HandleID="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Workload="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.510 [INFO][4054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" HandleID="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Workload="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380120), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"csi-node-driver-dc98b", "timestamp":"2025-11-24 01:46:22.503072742 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.510 [INFO][4054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.510 [INFO][4054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.510 [INFO][4054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.531 [INFO][4054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.551 [INFO][4054] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.581 [INFO][4054] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.586 [INFO][4054] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.594 [INFO][4054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.595 [INFO][4054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.602 [INFO][4054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2 Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.618 [INFO][4054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.640 [INFO][4054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.67/26] block=192.168.15.64/26 handle="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.640 [INFO][4054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.67/26] handle="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.642 [INFO][4054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 01:46:22.729644 containerd[1582]: 2025-11-24 01:46:22.642 [INFO][4054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.67/26] IPv6=[] ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" HandleID="k8s-pod-network.257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Workload="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.730722 containerd[1582]: 2025-11-24 01:46:22.647 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4c21c8f-271a-4e0d-ab8d-b3169fe61687", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-dc98b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e83083da5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:22.730722 containerd[1582]: 2025-11-24 01:46:22.648 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.67/32] ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.730722 containerd[1582]: 2025-11-24 01:46:22.648 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e83083da5a ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.730722 containerd[1582]: 2025-11-24 01:46:22.690 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.730722 containerd[1582]: 2025-11-24 01:46:22.692 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4c21c8f-271a-4e0d-ab8d-b3169fe61687", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2", Pod:"csi-node-driver-dc98b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e83083da5a", MAC:"ea:8f:e0:4b:e8:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:22.730722 containerd[1582]: 2025-11-24 01:46:22.725 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" Namespace="calico-system" Pod="csi-node-driver-dc98b" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-csi--node--driver--dc98b-eth0" Nov 24 01:46:22.799682 containerd[1582]: time="2025-11-24T01:46:22.798041756Z" level=info msg="connecting to shim 257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2" address="unix:///run/containerd/s/5ca611d40b8077a6e61652e8c954e4b3801cc65fd9b3cc1e4ab9dd1acf7d3742" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:22.801699 containerd[1582]: time="2025-11-24T01:46:22.801398181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ns8j8,Uid:e566338c-ba77-4549-8033-3f56c99af55d,Namespace:kube-system,Attempt:0,} returns sandbox id \"490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482\"" Nov 24 01:46:22.814036 containerd[1582]: time="2025-11-24T01:46:22.813875002Z" level=info msg="CreateContainer within sandbox \"490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 01:46:22.841736 containerd[1582]: time="2025-11-24T01:46:22.841577167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-7nccl,Uid:5debaa4f-5f0a-45c9-bc91-84f4de6609a5,Namespace:calico-apiserver,Attempt:0,}" Nov 24 01:46:22.859598 containerd[1582]: time="2025-11-24T01:46:22.857479633Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-rn29s,Uid:cff573f7-4f9e-4dab-b705-de47197417fc,Namespace:kube-system,Attempt:0,}" Nov 24 01:46:22.862805 systemd-networkd[1484]: calie4575f76c8a: Link UP Nov 24 01:46:22.871206 systemd-networkd[1484]: calie4575f76c8a: Gained carrier Nov 24 01:46:22.950931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119789768.mount: Deactivated successfully. Nov 24 01:46:22.959193 containerd[1582]: time="2025-11-24T01:46:22.959128052Z" level=info msg="Container ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.106 [INFO][4019] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.164 [INFO][4019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0 goldmane-666569f655- calico-system bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1 858 0 2025-11-24 01:45:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com goldmane-666569f655-k9g82 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie4575f76c8a [] [] }} ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.165 [INFO][4019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.535 [INFO][4056] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" HandleID="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Workload="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.539 [INFO][4056] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" HandleID="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Workload="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e490), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"goldmane-666569f655-k9g82", "timestamp":"2025-11-24 01:46:22.535268828 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.539 [INFO][4056] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.640 [INFO][4056] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.640 [INFO][4056] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.688 [INFO][4056] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.704 [INFO][4056] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.718 [INFO][4056] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.738 [INFO][4056] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.747 [INFO][4056] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.747 [INFO][4056] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.761 [INFO][4056] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8 Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.783 [INFO][4056] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.813 [INFO][4056] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.68/26] block=192.168.15.64/26 handle="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.814 [INFO][4056] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.68/26] handle="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.814 [INFO][4056] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 01:46:22.987115 containerd[1582]: 2025-11-24 01:46:22.814 [INFO][4056] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.68/26] IPv6=[] ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" HandleID="k8s-pod-network.aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Workload="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.992221 containerd[1582]: 2025-11-24 01:46:22.838 [INFO][4019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-k9g82", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4575f76c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:22.992221 containerd[1582]: 2025-11-24 01:46:22.839 [INFO][4019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.68/32] ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.992221 containerd[1582]: 2025-11-24 01:46:22.839 [INFO][4019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4575f76c8a ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.992221 containerd[1582]: 2025-11-24 01:46:22.876 [INFO][4019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.992221 containerd[1582]: 2025-11-24 01:46:22.882 [INFO][4019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" 
Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8", Pod:"goldmane-666569f655-k9g82", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4575f76c8a", MAC:"1e:0c:42:96:b9:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:22.992221 containerd[1582]: 2025-11-24 01:46:22.935 [INFO][4019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" Namespace="calico-system" Pod="goldmane-666569f655-k9g82" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-goldmane--666569f655--k9g82-eth0" Nov 24 01:46:22.992221 containerd[1582]: time="2025-11-24T01:46:22.990487056Z" level=info msg="CreateContainer within sandbox \"490e490a61dedcb32e314461cd9e97a9a2629912821ccce4268354840c410482\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099\"" Nov 24 01:46:22.996944 containerd[1582]: time="2025-11-24T01:46:22.996893194Z" level=info msg="StartContainer for \"ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099\"" Nov 24 01:46:23.003487 containerd[1582]: time="2025-11-24T01:46:23.003264259Z" level=info msg="connecting to shim ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099" address="unix:///run/containerd/s/3a7143091bd1574b37de3c77652e782c8ba7c4ba1743b949ad3d0a3e5974855a" protocol=ttrpc version=3 Nov 24 01:46:23.063089 systemd[1]: Started cri-containerd-257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2.scope - libcontainer container 257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2. 
Nov 24 01:46:23.100821 containerd[1582]: time="2025-11-24T01:46:23.100716101Z" level=info msg="connecting to shim aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8" address="unix:///run/containerd/s/a641abf1b6c97894a6f6d55849ebb77bcd612cb7eb1877294f81bf285b2e114d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:23.156903 systemd[1]: Started cri-containerd-ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099.scope - libcontainer container ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099. Nov 24 01:46:23.218024 systemd[1]: Started cri-containerd-aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8.scope - libcontainer container aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8. Nov 24 01:46:23.443065 containerd[1582]: time="2025-11-24T01:46:23.442968213Z" level=info msg="StartContainer for \"ad35827b3cfbf4ee42fd78c885ad2b353da4654fcfa29f099169e4ced25d6099\" returns successfully" Nov 24 01:46:23.543454 containerd[1582]: time="2025-11-24T01:46:23.543235054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87b79d49b-4kkv5,Uid:dabb5be9-db72-477b-91d5-84b55db3018a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\"" Nov 24 01:46:23.557079 containerd[1582]: time="2025-11-24T01:46:23.557011489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 01:46:23.567841 systemd-networkd[1484]: calie7980131c1e: Gained IPv6LL Nov 24 01:46:23.631807 systemd-networkd[1484]: cali2d7b0705935: Gained IPv6LL Nov 24 01:46:23.662267 systemd-networkd[1484]: cali4f1e67267fe: Link UP Nov 24 01:46:23.669577 systemd-networkd[1484]: cali4f1e67267fe: Gained carrier Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.082 [INFO][4216] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.130 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0 calico-apiserver-687d5b8b8f- calico-apiserver 5debaa4f-5f0a-45c9-bc91-84f4de6609a5 866 0 2025-11-24 01:45:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:687d5b8b8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com calico-apiserver-687d5b8b8f-7nccl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f1e67267fe [] [] }} ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.130 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.455 [INFO][4290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" 
HandleID="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.461 [INFO][4290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" HandleID="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001036e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"calico-apiserver-687d5b8b8f-7nccl", "timestamp":"2025-11-24 01:46:23.455330483 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.461 [INFO][4290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.461 [INFO][4290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.461 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.516 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.534 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.554 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.561 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.567 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.567 [INFO][4290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.577 [INFO][4290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.601 [INFO][4290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.616 [INFO][4290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.69/26] block=192.168.15.64/26 handle="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" host="srv-7vvyr.gb1.brightbox.com" Nov 
24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.617 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.69/26] handle="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.617 [INFO][4290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:23.735001 containerd[1582]: 2025-11-24 01:46:23.618 [INFO][4290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.69/26] IPv6=[] ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" HandleID="k8s-pod-network.5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.740165 containerd[1582]: 2025-11-24 01:46:23.635 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0", GenerateName:"calico-apiserver-687d5b8b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5debaa4f-5f0a-45c9-bc91-84f4de6609a5", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687d5b8b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-687d5b8b8f-7nccl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f1e67267fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:23.740165 containerd[1582]: 2025-11-24 01:46:23.639 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.69/32] ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.740165 containerd[1582]: 2025-11-24 01:46:23.639 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f1e67267fe ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.740165 containerd[1582]: 
2025-11-24 01:46:23.662 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.740165 containerd[1582]: 2025-11-24 01:46:23.663 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0", GenerateName:"calico-apiserver-687d5b8b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5debaa4f-5f0a-45c9-bc91-84f4de6609a5", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687d5b8b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e", Pod:"calico-apiserver-687d5b8b8f-7nccl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f1e67267fe", MAC:"f2:e0:14:ea:30:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:23.740165 containerd[1582]: 2025-11-24 01:46:23.730 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-7nccl" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--7nccl-eth0" Nov 24 01:46:23.824899 systemd-networkd[1484]: cali4e83083da5a: Gained IPv6LL Nov 24 01:46:23.853299 containerd[1582]: time="2025-11-24T01:46:23.852914187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-jvhv5,Uid:af4627b6-c7f1-489e-8935-b4a50923c295,Namespace:calico-apiserver,Attempt:0,}" Nov 24 01:46:23.868383 systemd-networkd[1484]: calic39e188e054: Link UP Nov 24 01:46:23.872370 systemd-networkd[1484]: calic39e188e054: Gained carrier Nov 24 01:46:23.928255 containerd[1582]: time="2025-11-24T01:46:23.926937494Z" level=info msg="connecting to shim 5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e" address="unix:///run/containerd/s/83dd4dfcd385d447ead3e772b25d61da30e655017b882cd8284f720e79da4d1f" namespace=k8s.io protocol=ttrpc version=3 Nov 24 
01:46:23.936085 containerd[1582]: time="2025-11-24T01:46:23.936022206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc98b,Uid:d4c21c8f-271a-4e0d-ab8d-b3169fe61687,Namespace:calico-system,Attempt:0,} returns sandbox id \"257d9664a1e83ceeee4157a0569283a634fe326492cf84e1bb7086a915066fe2\"" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.198 [INFO][4221] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.274 [INFO][4221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0 coredns-674b8bbfcf- kube-system cff573f7-4f9e-4dab-b705-de47197417fc 867 0 2025-11-24 01:45:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com coredns-674b8bbfcf-rn29s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic39e188e054 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.274 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.497 [INFO][4318] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" HandleID="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Workload="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.499 [INFO][4318] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" HandleID="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Workload="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000183d80), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-rn29s", "timestamp":"2025-11-24 01:46:23.497169389 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.500 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.617 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.617 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.651 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.678 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.710 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.727 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.749 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.749 [INFO][4318] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.755 [INFO][4318] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2 Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.770 [INFO][4318] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.796 [INFO][4318] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.70/26] block=192.168.15.64/26 handle="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.798 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.70/26] handle="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.798 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 01:46:23.936253 containerd[1582]: 2025-11-24 01:46:23.798 [INFO][4318] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.70/26] IPv6=[] ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" HandleID="k8s-pod-network.473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Workload="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:23.939307 containerd[1582]: 2025-11-24 01:46:23.834 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cff573f7-4f9e-4dab-b705-de47197417fc", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-rn29s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic39e188e054", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:23.939307 containerd[1582]: 2025-11-24 01:46:23.836 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.70/32] ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:23.939307 containerd[1582]: 2025-11-24 01:46:23.837 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic39e188e054 ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:23.939307 containerd[1582]: 2025-11-24 01:46:23.876 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:23.939307 containerd[1582]: 2025-11-24 01:46:23.878 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cff573f7-4f9e-4dab-b705-de47197417fc", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2", Pod:"coredns-674b8bbfcf-rn29s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic39e188e054", MAC:"9e:f3:f9:f8:c3:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:23.939307 containerd[1582]: 2025-11-24 01:46:23.902 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-rn29s" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rn29s-eth0" Nov 24 01:46:24.020724 containerd[1582]: time="2025-11-24T01:46:24.020411942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:24.033018 containerd[1582]: time="2025-11-24T01:46:24.031919587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 01:46:24.033018 containerd[1582]: time="2025-11-24T01:46:24.032065468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 01:46:24.052944 kubelet[2900]: E1124 01:46:24.052560 2900 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:46:24.053528 kubelet[2900]: E1124 01:46:24.053314 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:46:24.055307 containerd[1582]: time="2025-11-24T01:46:24.054063684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 01:46:24.056279 systemd[1]: Started cri-containerd-5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e.scope - libcontainer container 5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e. Nov 24 01:46:24.071042 kubelet[2900]: E1124 01:46:24.069840 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:238b70b5880e4fee8021234d1dfe7af1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jn5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-87b79d49b-4kkv5_calico-system(dabb5be9-db72-477b-91d5-84b55db3018a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:24.124475 containerd[1582]: time="2025-11-24T01:46:24.123459968Z" level=info msg="connecting to shim 473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2" address="unix:///run/containerd/s/8f746e94b94669c9e34c392008a94303267491c0ec19673d40c7d2f0d5ac0abf" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:24.339743 systemd[1]: Started cri-containerd-473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2.scope - libcontainer container 473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2. 
Nov 24 01:46:24.348645 containerd[1582]: time="2025-11-24T01:46:24.348539455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9g82,Uid:bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"aacf04144d78f7c7f0a9cf15350dcce5804756bf4570af810898921e40eef8f8\"" Nov 24 01:46:24.392893 containerd[1582]: time="2025-11-24T01:46:24.392746648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:24.403459 containerd[1582]: time="2025-11-24T01:46:24.403403298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 01:46:24.404443 containerd[1582]: time="2025-11-24T01:46:24.403947621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 01:46:24.405789 kubelet[2900]: E1124 01:46:24.405642 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:46:24.406920 kubelet[2900]: E1124 01:46:24.406677 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:46:24.407129 kubelet[2900]: E1124 01:46:24.407009 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:24.408329 containerd[1582]: time="2025-11-24T01:46:24.407840528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 01:46:24.419761 kubelet[2900]: I1124 01:46:24.419558 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ns8j8" podStartSLOduration=46.419522984 podStartE2EDuration="46.419522984s" podCreationTimestamp="2025-11-24 01:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 01:46:24.419223261 +0000 UTC m=+52.832011009" watchObservedRunningTime="2025-11-24 01:46:24.419522984 +0000 UTC m=+52.832310710" Nov 24 01:46:24.463812 systemd-networkd[1484]: calie4575f76c8a: Gained IPv6LL Nov 24 01:46:24.527719 containerd[1582]: time="2025-11-24T01:46:24.527392676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rn29s,Uid:cff573f7-4f9e-4dab-b705-de47197417fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2\"" Nov 24 01:46:24.542926 containerd[1582]: time="2025-11-24T01:46:24.542739658Z" level=info msg="CreateContainer within sandbox \"473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 01:46:24.575732 systemd-networkd[1484]: calif65cd8fa94a: Link UP Nov 24 01:46:24.581565 systemd-networkd[1484]: 
calif65cd8fa94a: Gained carrier Nov 24 01:46:24.594123 containerd[1582]: time="2025-11-24T01:46:24.593701702Z" level=info msg="Container 5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0: CDI devices from CRI Config.CDIDevices: []" Nov 24 01:46:24.617383 containerd[1582]: time="2025-11-24T01:46:24.617315241Z" level=info msg="CreateContainer within sandbox \"473817a99f4715a89a9d63d13c78de96e7f70b7dd766798a4b8d1fe52b9098e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0\"" Nov 24 01:46:24.620928 containerd[1582]: time="2025-11-24T01:46:24.620783055Z" level=info msg="StartContainer for \"5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0\"" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.136 [INFO][4421] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.227 [INFO][4421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0 calico-apiserver-687d5b8b8f- calico-apiserver af4627b6-c7f1-489e-8935-b4a50923c295 864 0 2025-11-24 01:45:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:687d5b8b8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com calico-apiserver-687d5b8b8f-jvhv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif65cd8fa94a [] [] }} ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.230 [INFO][4421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.423 [INFO][4522] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" HandleID="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.426 [INFO][4522] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" HandleID="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036b6e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"calico-apiserver-687d5b8b8f-jvhv5", "timestamp":"2025-11-24 01:46:24.423509163 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.429 [INFO][4522] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.429 [INFO][4522] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.429 [INFO][4522] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.467 [INFO][4522] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.483 [INFO][4522] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.514 [INFO][4522] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.522 [INFO][4522] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.530 [INFO][4522] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.532 [INFO][4522] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.536 [INFO][4522] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325 Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.544 [INFO][4522] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.556 [INFO][4522] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.71/26] block=192.168.15.64/26 handle="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.556 [INFO][4522] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.71/26] handle="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.556 [INFO][4522] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 01:46:24.624855 containerd[1582]: 2025-11-24 01:46:24.556 [INFO][4522] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.71/26] IPv6=[] ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" HandleID="k8s-pod-network.82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.628049 containerd[1582]: 2025-11-24 01:46:24.566 [INFO][4421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0", GenerateName:"calico-apiserver-687d5b8b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"af4627b6-c7f1-489e-8935-b4a50923c295", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687d5b8b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-687d5b8b8f-jvhv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif65cd8fa94a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:24.628049 containerd[1582]: 2025-11-24 01:46:24.567 [INFO][4421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.71/32] ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.628049 containerd[1582]: 2025-11-24 01:46:24.567 [INFO][4421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif65cd8fa94a ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.628049 containerd[1582]: 2025-11-24 01:46:24.592 [INFO][4421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.628049 containerd[1582]: 2025-11-24 01:46:24.593 
[INFO][4421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0", GenerateName:"calico-apiserver-687d5b8b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"af4627b6-c7f1-489e-8935-b4a50923c295", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687d5b8b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325", Pod:"calico-apiserver-687d5b8b8f-jvhv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif65cd8fa94a", MAC:"7a:27:77:c1:6c:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:24.628049 containerd[1582]: 2025-11-24 01:46:24.621 [INFO][4421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" Namespace="calico-apiserver" Pod="calico-apiserver-687d5b8b8f-jvhv5" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--apiserver--687d5b8b8f--jvhv5-eth0" Nov 24 01:46:24.628049 containerd[1582]: time="2025-11-24T01:46:24.625718799Z" level=info msg="connecting to shim 5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0" address="unix:///run/containerd/s/8f746e94b94669c9e34c392008a94303267491c0ec19673d40c7d2f0d5ac0abf" protocol=ttrpc version=3 Nov 24 01:46:24.697033 systemd[1]: Started cri-containerd-5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0.scope - libcontainer container 5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0. 
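The IPAM entries above claim 192.168.15.71 from the affine block 192.168.15.64/26 and record it on the endpoint as a host route (192.168.15.71/32). A small standard-library Go sketch reproducing that containment check; the addresses are copied from the log, and nothing beyond this arithmetic is implied about Calico's IPAM internals:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Affinity block and assigned address as reported by ipam/ipam.go above.
    	block := netip.MustParsePrefix("192.168.15.64/26")
    	addr := netip.MustParseAddr("192.168.15.71")

    	// A /26 spans 64 addresses (.64 through .127), so .71 is inside the block.
    	fmt.Println(block.Contains(addr)) // true

    	// The endpoint itself is written with a host route, i.e. a /32.
    	fmt.Println(netip.PrefixFrom(addr, 32)) // 192.168.15.71/32
    }

This is also why the plugin first tries the block already affine to srv-7vvyr.gb1.brightbox.com before claiming new address space.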
Nov 24 01:46:24.730501 containerd[1582]: time="2025-11-24T01:46:24.730437965Z" level=info msg="connecting to shim 82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325" address="unix:///run/containerd/s/287dc9d39f4441eb094176ea5b0d8583969f4fd6efd2af38f6d97b695cba5c59" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:24.752012 containerd[1582]: time="2025-11-24T01:46:24.751945137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:24.758196 containerd[1582]: time="2025-11-24T01:46:24.757316579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 01:46:24.758196 containerd[1582]: time="2025-11-24T01:46:24.757583145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 01:46:24.758402 kubelet[2900]: E1124 01:46:24.757776 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:46:24.758776 kubelet[2900]: E1124 01:46:24.758713 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:46:24.759331 kubelet[2900]: E1124 01:46:24.759093 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jn5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-87b79d49b-4kkv5_calico-system(dabb5be9-db72-477b-91d5-84b55db3018a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:24.768896 containerd[1582]: time="2025-11-24T01:46:24.767601074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 01:46:24.770370 kubelet[2900]: E1124 01:46:24.769749 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-87b79d49b-4kkv5" podUID="dabb5be9-db72-477b-91d5-84b55db3018a" Nov 24 01:46:24.824143 systemd[1]: Started cri-containerd-82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325.scope - libcontainer container 82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325. 
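Each failed PullImage above can be reproduced against the node's containerd socket, outside the kubelet, to confirm that ghcr.io really returns 404 for the tag. A hedged sketch using the containerd Go client; the socket path and the k8s.io namespace match what this log shows, while running such a probe at all is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Same socket the CRI runtime on this node is using.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Pull into the CRI namespace, matching namespace=k8s.io in the log.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"

    	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    	if err != nil {
    		// Expected here: the same "not found" resolution error as in the log.
    		log.Fatalf("pull %s: %v", ref, err)
    	}
    	fmt.Println("pulled", img.Name())
    }

The equivalent shell probe, ctr -n k8s.io images pull, exercises the same pull path that reported "fetch failed after status: 404 Not Found" above.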
Nov 24 01:46:24.883825 containerd[1582]: time="2025-11-24T01:46:24.883772827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-7nccl,Uid:5debaa4f-5f0a-45c9-bc91-84f4de6609a5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5280caa601f530a07aaa7046b6ada15ea0a74686ed0524e4edc4fd9f888dd39e\"" Nov 24 01:46:24.946463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149439895.mount: Deactivated successfully. Nov 24 01:46:24.976185 containerd[1582]: time="2025-11-24T01:46:24.976033581Z" level=info msg="StartContainer for \"5515fda04059de01c57b5b8fc4596b2a8983204bc9ce5c19886f35e78bea2ee0\" returns successfully" Nov 24 01:46:25.116644 containerd[1582]: time="2025-11-24T01:46:25.116539966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:25.122723 containerd[1582]: time="2025-11-24T01:46:25.122610037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 01:46:25.122896 containerd[1582]: time="2025-11-24T01:46:25.122669828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 01:46:25.123665 kubelet[2900]: E1124 01:46:25.123308 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:46:25.123665 kubelet[2900]: E1124 01:46:25.123391 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:46:25.124467 kubelet[2900]: E1124 01:46:25.123923 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhpzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9g82_calico-system(bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:25.125095 containerd[1582]: time="2025-11-24T01:46:25.124921685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 01:46:25.126067 kubelet[2900]: E1124 01:46:25.126001 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:46:25.258730 containerd[1582]: time="2025-11-24T01:46:25.258428827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687d5b8b8f-jvhv5,Uid:af4627b6-c7f1-489e-8935-b4a50923c295,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"82c908d4507d19db4c42e39eb0b3ac36f06b326acbe69838ede9bae8c4635325\"" Nov 24 01:46:25.298556 systemd-networkd[1484]: cali4f1e67267fe: Gained IPv6LL Nov 24 01:46:25.388397 containerd[1582]: time="2025-11-24T01:46:25.388197610Z" level=info msg="StopPodSandbox for \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\"" Nov 24 01:46:25.393278 kubelet[2900]: E1124 01:46:25.393041 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:46:25.428327 systemd[1]: cri-containerd-ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7.scope: Deactivated successfully. Nov 24 01:46:25.438113 containerd[1582]: time="2025-11-24T01:46:25.438050307Z" level=info msg="received sandbox exit event container_id:\"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" id:\"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" exit_status:137 exited_at:{seconds:1763948785 nanos:436479316}" monitor_name=podsandbox Nov 24 01:46:25.448811 kubelet[2900]: I1124 01:46:25.448181 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rn29s" podStartSLOduration=47.44815744 podStartE2EDuration="47.44815744s" podCreationTimestamp="2025-11-24 01:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 01:46:25.415874021 +0000 UTC m=+53.828661785" watchObservedRunningTime="2025-11-24 01:46:25.44815744 +0000 UTC m=+53.860945160" Nov 24 01:46:25.464148 containerd[1582]: time="2025-11-24T01:46:25.463892965Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:25.465854 containerd[1582]: time="2025-11-24T01:46:25.465720252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 01:46:25.466145 containerd[1582]: time="2025-11-24T01:46:25.466026283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 01:46:25.467641 kubelet[2900]: E1124 01:46:25.466775 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:46:25.467641 kubelet[2900]: E1124 01:46:25.466839 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:46:25.467641 kubelet[2900]: E1124 01:46:25.467093 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:25.468839 kubelet[2900]: E1124 01:46:25.468785 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:25.470096 containerd[1582]: time="2025-11-24T01:46:25.470016850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:46:25.540334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7-rootfs.mount: Deactivated successfully. Nov 24 01:46:25.551581 containerd[1582]: time="2025-11-24T01:46:25.551461395Z" level=info msg="shim disconnected" id=ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7 namespace=k8s.io Nov 24 01:46:25.554338 containerd[1582]: time="2025-11-24T01:46:25.552393715Z" level=warning msg="cleaning up after shim disconnected" id=ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7 namespace=k8s.io Nov 24 01:46:25.560572 containerd[1582]: time="2025-11-24T01:46:25.552435156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 24 01:46:25.616321 containerd[1582]: time="2025-11-24T01:46:25.615817657Z" level=info msg="received sandbox container exit event sandbox_id:\"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" exit_status:137 exited_at:{seconds:1763948785 nanos:436479316}" monitor_name=criService Nov 24 01:46:25.623516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7-shm.mount: Deactivated successfully. Nov 24 01:46:25.800404 containerd[1582]: time="2025-11-24T01:46:25.799740242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:25.802062 containerd[1582]: time="2025-11-24T01:46:25.801980732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:46:25.804124 containerd[1582]: time="2025-11-24T01:46:25.802025802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:46:25.804226 kubelet[2900]: E1124 01:46:25.802438 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:25.804226 kubelet[2900]: E1124 01:46:25.802517 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:25.804226 kubelet[2900]: E1124 01:46:25.803280 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vvbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-687d5b8b8f-7nccl_calico-apiserver(5debaa4f-5f0a-45c9-bc91-84f4de6609a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:25.805459 kubelet[2900]: E1124 01:46:25.805238 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:46:25.806928 containerd[1582]: time="2025-11-24T01:46:25.805968797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:46:25.824428 systemd-networkd[1484]: cali2d7b0705935: Link DOWN Nov 24 01:46:25.825244 systemd-networkd[1484]: cali2d7b0705935: Lost carrier Nov 24 01:46:25.935884 systemd-networkd[1484]: calic39e188e054: Gained IPv6LL Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.819 [INFO][4754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.820 [INFO][4754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" iface="eth0" netns="/var/run/netns/cni-f0b30745-e317-bf4e-46ce-673420b274c7" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.821 [INFO][4754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" iface="eth0" netns="/var/run/netns/cni-f0b30745-e317-bf4e-46ce-673420b274c7" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.844 [INFO][4754] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" after=24.18052ms iface="eth0" netns="/var/run/netns/cni-f0b30745-e317-bf4e-46ce-673420b274c7" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.845 [INFO][4754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.845 [INFO][4754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.972 [INFO][4773] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.972 [INFO][4773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:25.972 [INFO][4773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:26.060 [INFO][4773] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:26.060 [INFO][4773] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:26.063 [INFO][4773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:26.077724 containerd[1582]: 2025-11-24 01:46:26.071 [INFO][4754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:26.080219 containerd[1582]: time="2025-11-24T01:46:26.079203291Z" level=info msg="TearDown network for sandbox \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" successfully" Nov 24 01:46:26.080219 containerd[1582]: time="2025-11-24T01:46:26.079276387Z" level=info msg="StopPodSandbox for \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" returns successfully" Nov 24 01:46:26.092633 systemd[1]: run-netns-cni\x2df0b30745\x2de317\x2dbf4e\x2d46ce\x2d673420b274c7.mount: Deactivated successfully. 
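The whisker sandbox above is torn down with exit_status:137, the usual 128+signal encoding for a process ended by SIGKILL while the pod sandbox is being stopped. A tiny Go sketch of that decoding; the value 137 comes from the log, and the decoding rule is the generic POSIX convention rather than anything containerd-specific:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    func main() {
    	exitStatus := 137 // exit_status reported for the whisker sandbox above

    	if exitStatus > 128 {
    		sig := syscall.Signal(exitStatus - 128)
    		fmt.Printf("terminated by signal %d (%s)\n", sig, sig) // signal 9 (killed)
    	}
    }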
Nov 24 01:46:26.151555 containerd[1582]: time="2025-11-24T01:46:26.150881792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:26.156224 containerd[1582]: time="2025-11-24T01:46:26.156013955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:46:26.156224 containerd[1582]: time="2025-11-24T01:46:26.156055767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:46:26.156748 kubelet[2900]: E1124 01:46:26.156571 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:26.158228 kubelet[2900]: E1124 01:46:26.157403 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:26.158228 kubelet[2900]: E1124 01:46:26.157596 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c897p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-687d5b8b8f-jvhv5_calico-apiserver(af4627b6-c7f1-489e-8935-b4a50923c295): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:26.159637 kubelet[2900]: E1124 01:46:26.158878 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:46:26.171966 kubelet[2900]: I1124 01:46:26.171429 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jn5v\" (UniqueName: \"kubernetes.io/projected/dabb5be9-db72-477b-91d5-84b55db3018a-kube-api-access-4jn5v\") pod \"dabb5be9-db72-477b-91d5-84b55db3018a\" (UID: \"dabb5be9-db72-477b-91d5-84b55db3018a\") " Nov 24 01:46:26.171966 kubelet[2900]: I1124 01:46:26.171551 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-ca-bundle\") pod \"dabb5be9-db72-477b-91d5-84b55db3018a\" (UID: \"dabb5be9-db72-477b-91d5-84b55db3018a\") " Nov 24 01:46:26.183067 kubelet[2900]: I1124 01:46:26.181909 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-backend-key-pair\") pod \"dabb5be9-db72-477b-91d5-84b55db3018a\" (UID: \"dabb5be9-db72-477b-91d5-84b55db3018a\") " Nov 24 01:46:26.187765 kubelet[2900]: I1124 01:46:26.184127 2900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dabb5be9-db72-477b-91d5-84b55db3018a" (UID: "dabb5be9-db72-477b-91d5-84b55db3018a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 01:46:26.200100 kubelet[2900]: I1124 01:46:26.199988 2900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabb5be9-db72-477b-91d5-84b55db3018a-kube-api-access-4jn5v" (OuterVolumeSpecName: "kube-api-access-4jn5v") pod "dabb5be9-db72-477b-91d5-84b55db3018a" (UID: "dabb5be9-db72-477b-91d5-84b55db3018a"). InnerVolumeSpecName "kube-api-access-4jn5v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 01:46:26.204573 systemd[1]: var-lib-kubelet-pods-dabb5be9\x2ddb72\x2d477b\x2d91d5\x2d84b55db3018a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jn5v.mount: Deactivated successfully. Nov 24 01:46:26.205367 kubelet[2900]: I1124 01:46:26.205307 2900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dabb5be9-db72-477b-91d5-84b55db3018a" (UID: "dabb5be9-db72-477b-91d5-84b55db3018a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 01:46:26.206807 systemd[1]: var-lib-kubelet-pods-dabb5be9\x2ddb72\x2d477b\x2d91d5\x2d84b55db3018a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 01:46:26.282723 kubelet[2900]: I1124 01:46:26.282609 2900 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-ca-bundle\") on node \"srv-7vvyr.gb1.brightbox.com\" DevicePath \"\"" Nov 24 01:46:26.282723 kubelet[2900]: I1124 01:46:26.282721 2900 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dabb5be9-db72-477b-91d5-84b55db3018a-whisker-backend-key-pair\") on node \"srv-7vvyr.gb1.brightbox.com\" DevicePath \"\"" Nov 24 01:46:26.282723 kubelet[2900]: I1124 01:46:26.282740 2900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jn5v\" (UniqueName: \"kubernetes.io/projected/dabb5be9-db72-477b-91d5-84b55db3018a-kube-api-access-4jn5v\") on node \"srv-7vvyr.gb1.brightbox.com\" DevicePath \"\"" Nov 24 01:46:26.404278 systemd[1]: Removed slice kubepods-besteffort-poddabb5be9_db72_477b_91d5_84b55db3018a.slice - libcontainer container kubepods-besteffort-poddabb5be9_db72_477b_91d5_84b55db3018a.slice. 
Nov 24 01:46:26.409967 kubelet[2900]: E1124 01:46:26.409906 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:46:26.412213 kubelet[2900]: E1124 01:46:26.412131 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:46:26.412498 kubelet[2900]: E1124 01:46:26.412430 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:46:26.415716 kubelet[2900]: E1124 01:46:26.415018 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:26.575838 systemd-networkd[1484]: calif65cd8fa94a: Gained IPv6LL Nov 24 01:46:26.879329 systemd[1]: Created slice kubepods-besteffort-pod27e49083_c6b3_42ca_b3d9_4e1cc74718c7.slice - libcontainer container kubepods-besteffort-pod27e49083_c6b3_42ca_b3d9_4e1cc74718c7.slice. 
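By this point every Calico workload in the log is cycling through ErrImagePull and ImagePullBackOff. A hedged client-go sketch that lists the affected pods by their container waiting reason; the kubeconfig path and the choice of the calico-system namespace are assumptions, and only the waiting reasons themselves come from the entries above:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes an admin kubeconfig is available; inside the cluster,
    	// rest.InClusterConfig() would be used instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	pods, err := cs.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		for _, st := range p.Status.ContainerStatuses {
    			if w := st.State.Waiting; w != nil &&
    				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
    				fmt.Printf("%s/%s: %s (%s)\n", p.Namespace, p.Name, st.Name, w.Reason)
    			}
    		}
    	}
    }

The calico-apiserver pods seen above would need the same scan with that namespace substituted.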
Nov 24 01:46:26.892455 kubelet[2900]: I1124 01:46:26.892376 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27e49083-c6b3-42ca-b3d9-4e1cc74718c7-whisker-backend-key-pair\") pod \"whisker-9cb9c49d6-kkt7h\" (UID: \"27e49083-c6b3-42ca-b3d9-4e1cc74718c7\") " pod="calico-system/whisker-9cb9c49d6-kkt7h" Nov 24 01:46:26.894323 kubelet[2900]: I1124 01:46:26.894207 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e49083-c6b3-42ca-b3d9-4e1cc74718c7-whisker-ca-bundle\") pod \"whisker-9cb9c49d6-kkt7h\" (UID: \"27e49083-c6b3-42ca-b3d9-4e1cc74718c7\") " pod="calico-system/whisker-9cb9c49d6-kkt7h" Nov 24 01:46:26.894510 kubelet[2900]: I1124 01:46:26.894464 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxhbq\" (UniqueName: \"kubernetes.io/projected/27e49083-c6b3-42ca-b3d9-4e1cc74718c7-kube-api-access-sxhbq\") pod \"whisker-9cb9c49d6-kkt7h\" (UID: \"27e49083-c6b3-42ca-b3d9-4e1cc74718c7\") " pod="calico-system/whisker-9cb9c49d6-kkt7h" Nov 24 01:46:27.119177 systemd-networkd[1484]: vxlan.calico: Link UP Nov 24 01:46:27.119192 systemd-networkd[1484]: vxlan.calico: Gained carrier Nov 24 01:46:27.190134 containerd[1582]: time="2025-11-24T01:46:27.189943918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9cb9c49d6-kkt7h,Uid:27e49083-c6b3-42ca-b3d9-4e1cc74718c7,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:27.534920 systemd-networkd[1484]: calid5ccef37fc8: Link UP Nov 24 01:46:27.537243 systemd-networkd[1484]: calid5ccef37fc8: Gained carrier Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.355 [INFO][4840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0 whisker-9cb9c49d6- calico-system 27e49083-c6b3-42ca-b3d9-4e1cc74718c7 1056 0 2025-11-24 01:46:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9cb9c49d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com whisker-9cb9c49d6-kkt7h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid5ccef37fc8 [] [] }} ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.355 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.453 [INFO][4862] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" HandleID="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.454 [INFO][4862] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" HandleID="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8110), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"whisker-9cb9c49d6-kkt7h", "timestamp":"2025-11-24 01:46:27.45345079 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.454 [INFO][4862] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.454 [INFO][4862] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.454 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.466 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.476 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.489 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.493 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.498 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.498 [INFO][4862] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.502 [INFO][4862] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5 Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.511 [INFO][4862] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.522 [INFO][4862] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.72/26] block=192.168.15.64/26 handle="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.523 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.72/26] handle="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:27.567603 containerd[1582]: 
2025-11-24 01:46:27.523 [INFO][4862] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:27.567603 containerd[1582]: 2025-11-24 01:46:27.523 [INFO][4862] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.72/26] IPv6=[] ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" HandleID="k8s-pod-network.490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.571519 containerd[1582]: 2025-11-24 01:46:27.528 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0", GenerateName:"whisker-9cb9c49d6-", Namespace:"calico-system", SelfLink:"", UID:"27e49083-c6b3-42ca-b3d9-4e1cc74718c7", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 46, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9cb9c49d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"whisker-9cb9c49d6-kkt7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid5ccef37fc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:27.571519 containerd[1582]: 2025-11-24 01:46:27.528 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.72/32] ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.571519 containerd[1582]: 2025-11-24 01:46:27.528 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5ccef37fc8 ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.571519 containerd[1582]: 2025-11-24 01:46:27.538 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.571519 containerd[1582]: 2025-11-24 01:46:27.539 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0", GenerateName:"whisker-9cb9c49d6-", Namespace:"calico-system", SelfLink:"", UID:"27e49083-c6b3-42ca-b3d9-4e1cc74718c7", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 46, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9cb9c49d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5", Pod:"whisker-9cb9c49d6-kkt7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid5ccef37fc8", MAC:"02:01:19:50:5b:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:27.571519 containerd[1582]: 2025-11-24 01:46:27.560 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" Namespace="calico-system" Pod="whisker-9cb9c49d6-kkt7h" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--9cb9c49d6--kkt7h-eth0" Nov 24 01:46:27.671385 containerd[1582]: time="2025-11-24T01:46:27.671117402Z" level=info msg="connecting to shim 490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5" address="unix:///run/containerd/s/7d13b7ddf0ebfdfccce66e7a43cbbcd97d00e14c0900009514d3fb6df5e7587d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:27.733901 systemd[1]: Started cri-containerd-490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5.scope - libcontainer container 490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5. 
Nov 24 01:46:27.845594 kubelet[2900]: I1124 01:46:27.845518 2900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabb5be9-db72-477b-91d5-84b55db3018a" path="/var/lib/kubelet/pods/dabb5be9-db72-477b-91d5-84b55db3018a/volumes" Nov 24 01:46:28.003805 containerd[1582]: time="2025-11-24T01:46:28.003684015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9cb9c49d6-kkt7h,Uid:27e49083-c6b3-42ca-b3d9-4e1cc74718c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"490d613056322d7c59b1793c2b9bcee24c431896e3f6be64b6731e0b7ff3ffb5\"" Nov 24 01:46:28.007550 containerd[1582]: time="2025-11-24T01:46:28.007489181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 01:46:28.320385 containerd[1582]: time="2025-11-24T01:46:28.320051438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:28.322576 containerd[1582]: time="2025-11-24T01:46:28.322292715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 01:46:28.322576 containerd[1582]: time="2025-11-24T01:46:28.322419273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 01:46:28.323287 kubelet[2900]: E1124 01:46:28.323010 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:46:28.323577 kubelet[2900]: E1124 01:46:28.323423 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:46:28.324162 kubelet[2900]: E1124 01:46:28.324053 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:238b70b5880e4fee8021234d1dfe7af1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:28.327741 containerd[1582]: time="2025-11-24T01:46:28.327595851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 01:46:28.495957 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL Nov 24 01:46:28.651354 containerd[1582]: time="2025-11-24T01:46:28.650766507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:28.652282 containerd[1582]: time="2025-11-24T01:46:28.652240078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 01:46:28.652662 containerd[1582]: time="2025-11-24T01:46:28.652342847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 01:46:28.652876 kubelet[2900]: E1124 01:46:28.652805 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:46:28.652956 kubelet[2900]: E1124 01:46:28.652905 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:46:28.653663 kubelet[2900]: E1124 01:46:28.653253 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:28.655177 kubelet[2900]: E1124 01:46:28.655045 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:46:29.410310 kubelet[2900]: E1124 01:46:29.410249 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:46:29.456058 systemd-networkd[1484]: calid5ccef37fc8: Gained IPv6LL Nov 24 01:46:31.781602 containerd[1582]: time="2025-11-24T01:46:31.781547589Z" level=info msg="StopPodSandbox for \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\"" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.840 [WARNING][4985] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.841 [INFO][4985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.841 [INFO][4985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" iface="eth0" netns="" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.841 [INFO][4985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.841 [INFO][4985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.885 [INFO][4992] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.886 [INFO][4992] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.886 [INFO][4992] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.912 [WARNING][4992] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.913 [INFO][4992] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.917 [INFO][4992] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:31.925274 containerd[1582]: 2025-11-24 01:46:31.921 [INFO][4985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:31.925274 containerd[1582]: time="2025-11-24T01:46:31.925097814Z" level=info msg="TearDown network for sandbox \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" successfully" Nov 24 01:46:31.925274 containerd[1582]: time="2025-11-24T01:46:31.925131446Z" level=info msg="StopPodSandbox for \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" returns successfully" Nov 24 01:46:31.926782 containerd[1582]: time="2025-11-24T01:46:31.926353296Z" level=info msg="RemovePodSandbox for \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\"" Nov 24 01:46:31.927727 containerd[1582]: time="2025-11-24T01:46:31.927679948Z" level=info msg="Forcibly stopping sandbox \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\"" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:31.996 [WARNING][5009] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:31.996 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:31.996 [INFO][5009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" iface="eth0" netns="" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:31.996 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:31.996 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.032 [INFO][5017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.032 [INFO][5017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
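The WARNING lines in this teardown ("WorkloadEndpoint does not exist in the datastore, moving forward with the clean up", "Asked to release address but it doesn't exist. Ignoring") show CNI DEL being treated as idempotent: a repeated teardown must still succeed when the state is already gone. A schematic of that pattern, not Calico's actual code:

    def release_address(allocations: dict, handle_id: str) -> None:
        # Idempotent release: a missing handle is logged and ignored so a
        # second DEL for the same sandbox still returns success.
        if handle_id not in allocations:
            print(f"WARNING: no allocation for {handle_id!r}, ignoring")
            return
        del allocations[handle_id]
        print(f"released {handle_id!r}")

    # Hypothetical in-memory allocation table; 192.0.2.10 is a documentation
    # address, not one from this log.
    allocations = {"handle-a": "192.0.2.10"}
    release_address(allocations, "handle-a")   # normal release
    release_address(allocations, "handle-a")   # repeated DEL: warns, still succeeds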
Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.032 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.042 [WARNING][5017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.042 [INFO][5017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" HandleID="k8s-pod-network.ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Workload="srv--7vvyr.gb1.brightbox.com-k8s-whisker--87b79d49b--4kkv5-eth0" Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.045 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 01:46:32.053052 containerd[1582]: 2025-11-24 01:46:32.048 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7" Nov 24 01:46:32.054167 containerd[1582]: time="2025-11-24T01:46:32.053037154Z" level=info msg="TearDown network for sandbox \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" successfully" Nov 24 01:46:32.058802 containerd[1582]: time="2025-11-24T01:46:32.058748413Z" level=info msg="Ensure that sandbox ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7 in task-service has been cleanup successfully" Nov 24 01:46:32.068137 containerd[1582]: time="2025-11-24T01:46:32.068063775Z" level=info msg="RemovePodSandbox \"ee3691f02e3b721a8250180cdee3bd9e849bb5fd09d11f3a78e1ae80a410b2d7\" returns successfully" Nov 24 01:46:32.841578 containerd[1582]: time="2025-11-24T01:46:32.841509844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5d79d7cd-lvqjg,Uid:621aa00e-6d25-484a-b356-0b520628e4b2,Namespace:calico-system,Attempt:0,}" Nov 24 01:46:33.024959 systemd-networkd[1484]: caliab89382471e: Link UP Nov 24 01:46:33.026994 systemd-networkd[1484]: caliab89382471e: Gained carrier Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.911 [INFO][5024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0 calico-kube-controllers-6f5d79d7cd- calico-system 621aa00e-6d25-484a-b356-0b520628e4b2 860 0 2025-11-24 01:45:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f5d79d7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-7vvyr.gb1.brightbox.com calico-kube-controllers-6f5d79d7cd-lvqjg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab89382471e [] [] }} ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.913 [INFO][5024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.953 [INFO][5036] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" HandleID="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.954 [INFO][5036] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" HandleID="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-7vvyr.gb1.brightbox.com", "pod":"calico-kube-controllers-6f5d79d7cd-lvqjg", "timestamp":"2025-11-24 01:46:32.953953161 +0000 UTC"}, Hostname:"srv-7vvyr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.954 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.954 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.954 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-7vvyr.gb1.brightbox.com' Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.965 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.976 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.986 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.990 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.994 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.995 [INFO][5036] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:32.997 [INFO][5036] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10 Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:33.003 [INFO][5036] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:33.014 [INFO][5036] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.73/26] block=192.168.15.64/26 handle="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:33.014 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.73/26] handle="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" host="srv-7vvyr.gb1.brightbox.com" Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:33.014 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
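Both sandboxes on this node draw from the same affine IPAM block, 192.168.15.64/26: the whisker pod received 192.168.15.72 earlier and calico-kube-controllers is assigned 192.168.15.73 here, without touching any other node's block. A quick sanity check of that block math with the standard library:

    import ipaddress

    block = ipaddress.ip_network("192.168.15.64/26")
    print(block.num_addresses)                              # 64
    print(block.network_address, block.broadcast_address)   # 192.168.15.64 192.168.15.127
    print(ipaddress.ip_address("192.168.15.72") in block)   # True
    print(ipaddress.ip_address("192.168.15.73") in block)   # True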
Nov 24 01:46:33.055162 containerd[1582]: 2025-11-24 01:46:33.014 [INFO][5036] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.73/26] IPv6=[] ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" HandleID="k8s-pod-network.679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Workload="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.058524 containerd[1582]: 2025-11-24 01:46:33.018 [INFO][5024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0", GenerateName:"calico-kube-controllers-6f5d79d7cd-", Namespace:"calico-system", SelfLink:"", UID:"621aa00e-6d25-484a-b356-0b520628e4b2", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f5d79d7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6f5d79d7cd-lvqjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab89382471e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:33.058524 containerd[1582]: 2025-11-24 01:46:33.019 [INFO][5024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.73/32] ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.058524 containerd[1582]: 2025-11-24 01:46:33.019 [INFO][5024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab89382471e ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.058524 containerd[1582]: 2025-11-24 01:46:33.028 [INFO][5024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" 
WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.058524 containerd[1582]: 2025-11-24 01:46:33.029 [INFO][5024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0", GenerateName:"calico-kube-controllers-6f5d79d7cd-", Namespace:"calico-system", SelfLink:"", UID:"621aa00e-6d25-484a-b356-0b520628e4b2", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 1, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f5d79d7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-7vvyr.gb1.brightbox.com", ContainerID:"679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10", Pod:"calico-kube-controllers-6f5d79d7cd-lvqjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab89382471e", MAC:"0a:32:49:b5:85:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 01:46:33.058524 containerd[1582]: 2025-11-24 01:46:33.045 [INFO][5024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" Namespace="calico-system" Pod="calico-kube-controllers-6f5d79d7cd-lvqjg" WorkloadEndpoint="srv--7vvyr.gb1.brightbox.com-k8s-calico--kube--controllers--6f5d79d7cd--lvqjg-eth0" Nov 24 01:46:33.090963 containerd[1582]: time="2025-11-24T01:46:33.090884770Z" level=info msg="connecting to shim 679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10" address="unix:///run/containerd/s/08d542a2db5506b591441ccc1bceff870beeffcbb4c5b053366e6f76db8c7260" namespace=k8s.io protocol=ttrpc version=3 Nov 24 01:46:33.137208 systemd[1]: Started cri-containerd-679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10.scope - libcontainer container 679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10. 
Nov 24 01:46:33.226315 containerd[1582]: time="2025-11-24T01:46:33.226234424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f5d79d7cd-lvqjg,Uid:621aa00e-6d25-484a-b356-0b520628e4b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"679f939e29061c522b7c40a4203102a96219cb3efc2acfe4503008fbabc93f10\"" Nov 24 01:46:33.230787 containerd[1582]: time="2025-11-24T01:46:33.230732290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 01:46:33.543642 containerd[1582]: time="2025-11-24T01:46:33.543540146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:33.544774 containerd[1582]: time="2025-11-24T01:46:33.544722789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 01:46:33.545034 containerd[1582]: time="2025-11-24T01:46:33.544732911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 01:46:33.545126 kubelet[2900]: E1124 01:46:33.545056 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:46:33.546157 kubelet[2900]: E1124 01:46:33.545131 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:46:33.546157 kubelet[2900]: E1124 01:46:33.545311 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8mnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:33.546696 kubelet[2900]: E1124 01:46:33.546654 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:46:34.421952 kubelet[2900]: E1124 01:46:34.421700 2900 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:46:34.447833 systemd-networkd[1484]: caliab89382471e: Gained IPv6LL Nov 24 01:46:37.842763 containerd[1582]: time="2025-11-24T01:46:37.842675388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 01:46:38.180662 containerd[1582]: time="2025-11-24T01:46:38.180427973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:38.182319 containerd[1582]: time="2025-11-24T01:46:38.182081940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 01:46:38.182319 containerd[1582]: time="2025-11-24T01:46:38.182124928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 01:46:38.182737 kubelet[2900]: E1124 01:46:38.182356 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:46:38.182737 kubelet[2900]: E1124 01:46:38.182433 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:46:38.183755 kubelet[2900]: E1124 01:46:38.182660 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhpzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9g82_calico-system(bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:38.184871 kubelet[2900]: E1124 01:46:38.184816 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:46:38.845947 containerd[1582]: 
time="2025-11-24T01:46:38.845796299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 01:46:39.156394 containerd[1582]: time="2025-11-24T01:46:39.156145004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:39.157755 containerd[1582]: time="2025-11-24T01:46:39.157670871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 01:46:39.157860 containerd[1582]: time="2025-11-24T01:46:39.157720057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 01:46:39.158497 kubelet[2900]: E1124 01:46:39.158035 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:46:39.158497 kubelet[2900]: E1124 01:46:39.158132 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:46:39.158497 kubelet[2900]: E1124 01:46:39.158398 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:39.162099 containerd[1582]: time="2025-11-24T01:46:39.161896652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 01:46:39.469990 containerd[1582]: time="2025-11-24T01:46:39.469767129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:39.471744 containerd[1582]: time="2025-11-24T01:46:39.471664030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 01:46:39.471916 containerd[1582]: time="2025-11-24T01:46:39.471774195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 01:46:39.472138 kubelet[2900]: E1124 01:46:39.472054 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:46:39.474034 kubelet[2900]: E1124 01:46:39.472134 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:46:39.474034 kubelet[2900]: E1124 01:46:39.472319 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:39.474034 kubelet[2900]: E1124 01:46:39.473655 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:39.845656 containerd[1582]: time="2025-11-24T01:46:39.845479170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:46:40.159900 containerd[1582]: time="2025-11-24T01:46:40.159843054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:40.161892 containerd[1582]: time="2025-11-24T01:46:40.161790655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:46:40.161980 containerd[1582]: time="2025-11-24T01:46:40.161887397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:46:40.162459 kubelet[2900]: E1124 01:46:40.162332 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:40.162744 kubelet[2900]: E1124 01:46:40.162427 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:40.163167 kubelet[2900]: E1124 01:46:40.162906 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c897p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-687d5b8b8f-jvhv5_calico-apiserver(af4627b6-c7f1-489e-8935-b4a50923c295): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:40.164193 containerd[1582]: time="2025-11-24T01:46:40.163897433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 01:46:40.164490 kubelet[2900]: E1124 01:46:40.164147 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:46:40.475264 containerd[1582]: time="2025-11-24T01:46:40.474556291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:40.476286 containerd[1582]: time="2025-11-24T01:46:40.476211736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 01:46:40.476410 containerd[1582]: time="2025-11-24T01:46:40.476334780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 01:46:40.476735 kubelet[2900]: E1124 01:46:40.476670 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:46:40.477253 kubelet[2900]: E1124 01:46:40.476744 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:46:40.477253 kubelet[2900]: E1124 01:46:40.476969 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:238b70b5880e4fee8021234d1dfe7af1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:40.479277 containerd[1582]: time="2025-11-24T01:46:40.479237592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 01:46:40.797062 containerd[1582]: time="2025-11-24T01:46:40.796834363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:40.798266 containerd[1582]: time="2025-11-24T01:46:40.798212512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 01:46:40.798333 containerd[1582]: time="2025-11-24T01:46:40.798312908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 01:46:40.798606 kubelet[2900]: E1124 01:46:40.798536 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:46:40.798725 kubelet[2900]: E1124 01:46:40.798672 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:46:40.799331 kubelet[2900]: E1124 01:46:40.799142 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:40.800732 kubelet[2900]: E1124 01:46:40.800675 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:46:40.845242 containerd[1582]: time="2025-11-24T01:46:40.845175173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:46:41.155184 containerd[1582]: time="2025-11-24T01:46:41.154996000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 
01:46:41.156760 containerd[1582]: time="2025-11-24T01:46:41.156591338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:46:41.156760 containerd[1582]: time="2025-11-24T01:46:41.156726073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:46:41.157009 kubelet[2900]: E1124 01:46:41.156946 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:41.157846 kubelet[2900]: E1124 01:46:41.157028 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:46:41.157846 kubelet[2900]: E1124 01:46:41.157234 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vvbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod calico-apiserver-687d5b8b8f-7nccl_calico-apiserver(5debaa4f-5f0a-45c9-bc91-84f4de6609a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:41.158403 kubelet[2900]: E1124 01:46:41.158341 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:46:46.842772 containerd[1582]: time="2025-11-24T01:46:46.842671896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 01:46:47.159096 containerd[1582]: time="2025-11-24T01:46:47.158860762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:46:47.160484 containerd[1582]: time="2025-11-24T01:46:47.160286603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 01:46:47.160484 containerd[1582]: time="2025-11-24T01:46:47.160438597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 01:46:47.160866 kubelet[2900]: E1124 01:46:47.160799 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:46:47.161451 kubelet[2900]: E1124 01:46:47.160882 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:46:47.161451 kubelet[2900]: E1124 01:46:47.161071 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8mnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 01:46:47.162503 kubelet[2900]: E1124 01:46:47.162302 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:46:50.842810 kubelet[2900]: E1124 01:46:50.842245 2900 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:46:51.845333 kubelet[2900]: E1124 01:46:51.844776 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:46:52.843786 kubelet[2900]: E1124 01:46:52.843580 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:46:53.845250 kubelet[2900]: E1124 01:46:53.845085 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:46:53.847110 kubelet[2900]: E1124 01:46:53.846217 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:46:59.845492 kubelet[2900]: E1124 01:46:59.845418 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:47:04.844693 containerd[1582]: time="2025-11-24T01:47:04.844634658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 01:47:05.181893 containerd[1582]: time="2025-11-24T01:47:05.181797068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:05.183875 containerd[1582]: time="2025-11-24T01:47:05.183705313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 01:47:05.183875 containerd[1582]: time="2025-11-24T01:47:05.183741091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 01:47:05.185265 kubelet[2900]: E1124 01:47:05.184305 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:47:05.185265 kubelet[2900]: E1124 01:47:05.184384 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:47:05.185265 kubelet[2900]: E1124 01:47:05.184701 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhpzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9g82_calico-system(bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:05.186516 containerd[1582]: time="2025-11-24T01:47:05.186342871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 01:47:05.186603 kubelet[2900]: E1124 01:47:05.186456 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:47:05.495926 containerd[1582]: time="2025-11-24T01:47:05.495329034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:05.496836 containerd[1582]: time="2025-11-24T01:47:05.496785700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 01:47:05.496915 containerd[1582]: time="2025-11-24T01:47:05.496888741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 01:47:05.497634 kubelet[2900]: E1124 01:47:05.497095 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:47:05.497634 kubelet[2900]: E1124 01:47:05.497168 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:47:05.497634 kubelet[2900]: E1124 01:47:05.497428 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:238b70b5880e4fee8021234d1dfe7af1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" logger="UnhandledError" Nov 24 01:47:05.500891 containerd[1582]: time="2025-11-24T01:47:05.498370858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:47:05.815741 containerd[1582]: time="2025-11-24T01:47:05.814948150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:05.817311 containerd[1582]: time="2025-11-24T01:47:05.817130767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:47:05.817311 containerd[1582]: time="2025-11-24T01:47:05.817215924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:47:05.817600 kubelet[2900]: E1124 01:47:05.817517 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:05.817689 kubelet[2900]: E1124 01:47:05.817662 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:05.818299 kubelet[2900]: E1124 01:47:05.818037 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vvbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-687d5b8b8f-7nccl_calico-apiserver(5debaa4f-5f0a-45c9-bc91-84f4de6609a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:05.819635 containerd[1582]: time="2025-11-24T01:47:05.819001359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 01:47:05.819852 kubelet[2900]: E1124 01:47:05.819790 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:47:06.150426 containerd[1582]: time="2025-11-24T01:47:06.149730195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:06.157588 containerd[1582]: time="2025-11-24T01:47:06.157399597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 01:47:06.157588 containerd[1582]: time="2025-11-24T01:47:06.157432695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 01:47:06.158832 containerd[1582]: time="2025-11-24T01:47:06.158342749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:47:06.158915 kubelet[2900]: E1124 01:47:06.157840 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:47:06.158915 kubelet[2900]: E1124 01:47:06.157923 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:47:06.158915 kubelet[2900]: E1124 01:47:06.158379 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:06.160150 kubelet[2900]: E1124 01:47:06.159878 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:47:06.464688 containerd[1582]: time="2025-11-24T01:47:06.464159411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:06.467394 containerd[1582]: 
time="2025-11-24T01:47:06.467329235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:47:06.467862 containerd[1582]: time="2025-11-24T01:47:06.467470773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:47:06.468748 kubelet[2900]: E1124 01:47:06.467694 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:06.468748 kubelet[2900]: E1124 01:47:06.467755 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:06.468748 kubelet[2900]: E1124 01:47:06.468248 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c897p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-687d5b8b8f-jvhv5_calico-apiserver(af4627b6-c7f1-489e-8935-b4a50923c295): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:06.470215 kubelet[2900]: E1124 01:47:06.469821 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:47:06.470794 containerd[1582]: time="2025-11-24T01:47:06.470758687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 01:47:06.809401 containerd[1582]: time="2025-11-24T01:47:06.809080258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:06.812840 containerd[1582]: time="2025-11-24T01:47:06.812729259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 01:47:06.813015 containerd[1582]: time="2025-11-24T01:47:06.812729276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 01:47:06.813978 kubelet[2900]: E1124 01:47:06.813240 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:47:06.813978 kubelet[2900]: E1124 01:47:06.813313 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:47:06.813978 kubelet[2900]: E1124 01:47:06.813486 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:06.816668 containerd[1582]: time="2025-11-24T01:47:06.816589201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 01:47:07.124018 containerd[1582]: time="2025-11-24T01:47:07.123268599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:07.127091 containerd[1582]: time="2025-11-24T01:47:07.127021166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 01:47:07.127214 containerd[1582]: time="2025-11-24T01:47:07.127058480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 01:47:07.127640 kubelet[2900]: E1124 01:47:07.127452 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:47:07.127640 kubelet[2900]: E1124 01:47:07.127540 2900 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:47:07.129565 kubelet[2900]: E1124 01:47:07.129416 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:07.131472 kubelet[2900]: E1124 01:47:07.131395 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:47:13.843707 containerd[1582]: time="2025-11-24T01:47:13.843269650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 01:47:14.153344 containerd[1582]: time="2025-11-24T01:47:14.153283326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:14.156243 containerd[1582]: time="2025-11-24T01:47:14.155003051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 01:47:14.156243 containerd[1582]: time="2025-11-24T01:47:14.155134025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 01:47:14.156956 kubelet[2900]: E1124 01:47:14.156878 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:47:14.157912 kubelet[2900]: E1124 01:47:14.157472 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:47:14.157912 kubelet[2900]: E1124 01:47:14.157799 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8mnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:14.159166 kubelet[2900]: E1124 01:47:14.159079 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:47:16.844938 kubelet[2900]: E1124 01:47:16.844819 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:47:17.842534 kubelet[2900]: E1124 01:47:17.841809 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:47:18.843142 kubelet[2900]: E1124 01:47:18.842906 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:47:19.846816 kubelet[2900]: E1124 01:47:19.846750 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:47:20.842291 kubelet[2900]: E1124 01:47:20.842229 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:47:25.844282 kubelet[2900]: E1124 01:47:25.843800 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:47:25.923121 systemd[1]: Started sshd@9-10.230.76.74:22-139.178.68.195:57462.service - OpenSSH per-connection server daemon (139.178.68.195:57462). 
Nov 24 01:47:26.921474 sshd[5198]: Accepted publickey for core from 139.178.68.195 port 57462 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:26.923305 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:26.936853 systemd-logind[1561]: New session 12 of user core. Nov 24 01:47:26.945135 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 01:47:28.225263 sshd[5202]: Connection closed by 139.178.68.195 port 57462 Nov 24 01:47:28.225552 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Nov 24 01:47:28.234803 systemd[1]: sshd@9-10.230.76.74:22-139.178.68.195:57462.service: Deactivated successfully. Nov 24 01:47:28.239121 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 01:47:28.241233 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Nov 24 01:47:28.245779 systemd-logind[1561]: Removed session 12. Nov 24 01:47:29.849002 kubelet[2900]: E1124 01:47:29.848488 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:47:30.841704 kubelet[2900]: E1124 01:47:30.841635 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:47:31.845556 kubelet[2900]: E1124 01:47:31.845478 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:47:32.851698 kubelet[2900]: E1124 01:47:32.850114 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:47:33.387728 systemd[1]: Started sshd@10-10.230.76.74:22-139.178.68.195:57714.service - OpenSSH per-connection server daemon (139.178.68.195:57714). Nov 24 01:47:33.847253 kubelet[2900]: E1124 01:47:33.847192 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:47:34.334311 sshd[5218]: Accepted publickey for core from 139.178.68.195 port 57714 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:34.337119 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:34.346406 systemd-logind[1561]: New session 13 of user core. Nov 24 01:47:34.356143 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 01:47:35.138641 sshd[5221]: Connection closed by 139.178.68.195 port 57714 Nov 24 01:47:35.138216 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Nov 24 01:47:35.147585 systemd[1]: sshd@10-10.230.76.74:22-139.178.68.195:57714.service: Deactivated successfully. Nov 24 01:47:35.153625 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 01:47:35.158079 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Nov 24 01:47:35.163680 systemd-logind[1561]: Removed session 13. Nov 24 01:47:39.846325 kubelet[2900]: E1124 01:47:39.846265 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:47:40.299258 systemd[1]: Started sshd@11-10.230.76.74:22-139.178.68.195:57728.service - OpenSSH per-connection server daemon (139.178.68.195:57728). 
Nov 24 01:47:41.335785 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 57728 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:41.338492 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:41.347585 systemd-logind[1561]: New session 14 of user core. Nov 24 01:47:41.353156 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 01:47:41.846076 kubelet[2900]: E1124 01:47:41.845501 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:47:42.202759 sshd[5238]: Connection closed by 139.178.68.195 port 57728 Nov 24 01:47:42.206925 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Nov 24 01:47:42.215460 systemd[1]: sshd@11-10.230.76.74:22-139.178.68.195:57728.service: Deactivated successfully. Nov 24 01:47:42.215782 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Nov 24 01:47:42.221938 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 01:47:42.230944 systemd-logind[1561]: Removed session 14. Nov 24 01:47:42.353996 systemd[1]: Started sshd@12-10.230.76.74:22-139.178.68.195:38604.service - OpenSSH per-connection server daemon (139.178.68.195:38604). Nov 24 01:47:42.843540 kubelet[2900]: E1124 01:47:42.843471 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:47:43.266844 sshd[5250]: Accepted publickey for core from 139.178.68.195 port 38604 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:43.268886 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:43.278157 systemd-logind[1561]: New session 15 of user core. Nov 24 01:47:43.284945 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 01:47:44.165472 sshd[5253]: Connection closed by 139.178.68.195 port 38604 Nov 24 01:47:44.166955 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Nov 24 01:47:44.173114 systemd[1]: sshd@12-10.230.76.74:22-139.178.68.195:38604.service: Deactivated successfully. Nov 24 01:47:44.177210 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 01:47:44.180495 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Nov 24 01:47:44.184769 systemd-logind[1561]: Removed session 15. Nov 24 01:47:44.324282 systemd[1]: Started sshd@13-10.230.76.74:22-139.178.68.195:38616.service - OpenSSH per-connection server daemon (139.178.68.195:38616). 
Nov 24 01:47:44.844196 kubelet[2900]: E1124 01:47:44.844085 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:47:45.275010 sshd[5263]: Accepted publickey for core from 139.178.68.195 port 38616 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:45.278582 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:45.289710 systemd-logind[1561]: New session 16 of user core. Nov 24 01:47:45.297939 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 01:47:45.852052 kubelet[2900]: E1124 01:47:45.851948 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:47:46.111761 sshd[5266]: Connection closed by 139.178.68.195 port 38616 Nov 24 01:47:46.111593 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Nov 24 01:47:46.125415 systemd[1]: sshd@13-10.230.76.74:22-139.178.68.195:38616.service: Deactivated successfully. Nov 24 01:47:46.129281 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 01:47:46.131312 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Nov 24 01:47:46.133923 systemd-logind[1561]: Removed session 16. 
Nov 24 01:47:47.845372 containerd[1582]: time="2025-11-24T01:47:47.844811367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:47:48.186411 containerd[1582]: time="2025-11-24T01:47:48.186345743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:48.191186 containerd[1582]: time="2025-11-24T01:47:48.191103278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:47:48.191332 containerd[1582]: time="2025-11-24T01:47:48.191261173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:47:48.191712 kubelet[2900]: E1124 01:47:48.191456 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:48.191712 kubelet[2900]: E1124 01:47:48.191518 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:48.194411 kubelet[2900]: E1124 01:47:48.193260 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c897p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-687d5b8b8f-jvhv5_calico-apiserver(af4627b6-c7f1-489e-8935-b4a50923c295): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:48.194759 kubelet[2900]: E1124 01:47:48.194697 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:47:51.302233 systemd[1]: Started sshd@14-10.230.76.74:22-139.178.68.195:52012.service - OpenSSH per-connection server daemon (139.178.68.195:52012). Nov 24 01:47:51.845815 kubelet[2900]: E1124 01:47:51.845522 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:47:52.345165 sshd[5291]: Accepted publickey for core from 139.178.68.195 port 52012 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:52.346319 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:52.355702 systemd-logind[1561]: New session 17 of user core. Nov 24 01:47:52.361929 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 01:47:53.190776 sshd[5294]: Connection closed by 139.178.68.195 port 52012 Nov 24 01:47:53.191686 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Nov 24 01:47:53.198345 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Nov 24 01:47:53.199814 systemd[1]: sshd@14-10.230.76.74:22-139.178.68.195:52012.service: Deactivated successfully. Nov 24 01:47:53.205969 systemd[1]: session-17.scope: Deactivated successfully. 
Nov 24 01:47:53.212526 systemd-logind[1561]: Removed session 17. Nov 24 01:47:54.844924 containerd[1582]: time="2025-11-24T01:47:54.843357848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 01:47:55.163249 containerd[1582]: time="2025-11-24T01:47:55.163113589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:55.165242 containerd[1582]: time="2025-11-24T01:47:55.165091695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 01:47:55.165242 containerd[1582]: time="2025-11-24T01:47:55.165201228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 01:47:55.165754 kubelet[2900]: E1124 01:47:55.165661 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:55.167598 kubelet[2900]: E1124 01:47:55.165787 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 01:47:55.167598 kubelet[2900]: E1124 01:47:55.166006 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vvbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-687d5b8b8f-7nccl_calico-apiserver(5debaa4f-5f0a-45c9-bc91-84f4de6609a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:55.167598 kubelet[2900]: E1124 01:47:55.167304 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:47:55.846718 containerd[1582]: time="2025-11-24T01:47:55.845869398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 01:47:56.162652 containerd[1582]: time="2025-11-24T01:47:56.161495683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:56.164467 containerd[1582]: time="2025-11-24T01:47:56.164317550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 01:47:56.164467 containerd[1582]: time="2025-11-24T01:47:56.164354718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 01:47:56.167056 kubelet[2900]: E1124 01:47:56.166682 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:47:56.167604 kubelet[2900]: E1124 01:47:56.167076 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 01:47:56.167848 kubelet[2900]: E1124 01:47:56.167543 2900 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhpzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9g82_calico-system(bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:56.179804 kubelet[2900]: E1124 01:47:56.179692 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" 
podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:47:58.338570 systemd[1]: Started sshd@15-10.230.76.74:22-139.178.68.195:52024.service - OpenSSH per-connection server daemon (139.178.68.195:52024). Nov 24 01:47:58.843916 containerd[1582]: time="2025-11-24T01:47:58.843853527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 01:47:59.159931 containerd[1582]: time="2025-11-24T01:47:59.159841831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:59.161276 containerd[1582]: time="2025-11-24T01:47:59.161220244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 01:47:59.161408 containerd[1582]: time="2025-11-24T01:47:59.161336952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 01:47:59.161900 kubelet[2900]: E1124 01:47:59.161835 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:47:59.163746 kubelet[2900]: E1124 01:47:59.161910 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 01:47:59.163746 kubelet[2900]: E1124 01:47:59.163444 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:238b70b5880e4fee8021234d1dfe7af1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:59.166845 containerd[1582]: time="2025-11-24T01:47:59.166770498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 01:47:59.271729 sshd[5332]: Accepted publickey for core from 139.178.68.195 port 52024 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:47:59.273704 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:47:59.282090 systemd-logind[1561]: New session 18 of user core. Nov 24 01:47:59.291046 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 01:47:59.501880 containerd[1582]: time="2025-11-24T01:47:59.501703982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:47:59.503568 containerd[1582]: time="2025-11-24T01:47:59.503454415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 01:47:59.503900 containerd[1582]: time="2025-11-24T01:47:59.503487154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 01:47:59.504658 kubelet[2900]: E1124 01:47:59.504285 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:47:59.504658 kubelet[2900]: E1124 01:47:59.504355 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 01:47:59.504658 kubelet[2900]: E1124 01:47:59.504547 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxhbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cb9c49d6-kkt7h_calico-system(27e49083-c6b3-42ca-b3d9-4e1cc74718c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 01:47:59.505987 kubelet[2900]: E1124 01:47:59.505912 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:47:59.848720 containerd[1582]: time="2025-11-24T01:47:59.847814867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 01:48:00.016049 sshd[5335]: Connection closed by 139.178.68.195 port 52024 Nov 24 01:48:00.017240 sshd-session[5332]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:00.024900 systemd[1]: sshd@15-10.230.76.74:22-139.178.68.195:52024.service: Deactivated successfully. 
Nov 24 01:48:00.031232 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 01:48:00.034126 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Nov 24 01:48:00.037479 systemd-logind[1561]: Removed session 18. Nov 24 01:48:00.155151 containerd[1582]: time="2025-11-24T01:48:00.155100250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:48:00.156472 containerd[1582]: time="2025-11-24T01:48:00.156381895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 01:48:00.156472 containerd[1582]: time="2025-11-24T01:48:00.156435700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 01:48:00.157635 kubelet[2900]: E1124 01:48:00.157088 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:48:00.157635 kubelet[2900]: E1124 01:48:00.157170 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 01:48:00.157974 kubelet[2900]: E1124 01:48:00.157710 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 01:48:00.161787 containerd[1582]: time="2025-11-24T01:48:00.161600241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 01:48:00.479131 containerd[1582]: time="2025-11-24T01:48:00.478968476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:48:00.481797 containerd[1582]: time="2025-11-24T01:48:00.481678188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 01:48:00.481973 containerd[1582]: time="2025-11-24T01:48:00.481795450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 01:48:00.482478 kubelet[2900]: E1124 01:48:00.482409 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:48:00.483445 kubelet[2900]: E1124 01:48:00.483098 2900 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 01:48:00.483445 kubelet[2900]: E1124 01:48:00.483348 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9784g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dc98b_calico-system(d4c21c8f-271a-4e0d-ab8d-b3169fe61687): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 01:48:00.484901 kubelet[2900]: E1124 01:48:00.484850 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:48:02.842431 kubelet[2900]: E1124 01:48:02.842367 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:48:05.180972 systemd[1]: Started sshd@16-10.230.76.74:22-139.178.68.195:60382.service - OpenSSH per-connection server daemon (139.178.68.195:60382). Nov 24 01:48:05.845792 kubelet[2900]: E1124 01:48:05.845134 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:48:06.109123 sshd[5367]: Accepted publickey for core from 139.178.68.195 port 60382 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:06.111582 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:06.122271 systemd-logind[1561]: New session 19 of user core. Nov 24 01:48:06.128745 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 01:48:06.844013 containerd[1582]: time="2025-11-24T01:48:06.843804159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 01:48:06.876142 sshd[5370]: Connection closed by 139.178.68.195 port 60382 Nov 24 01:48:06.877853 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:06.887064 systemd[1]: sshd@16-10.230.76.74:22-139.178.68.195:60382.service: Deactivated successfully. Nov 24 01:48:06.894578 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 01:48:06.897883 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Nov 24 01:48:06.902372 systemd-logind[1561]: Removed session 19. Nov 24 01:48:07.044989 systemd[1]: Started sshd@17-10.230.76.74:22-139.178.68.195:60396.service - OpenSSH per-connection server daemon (139.178.68.195:60396). 
Nov 24 01:48:07.177053 containerd[1582]: time="2025-11-24T01:48:07.176817220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 01:48:07.178233 containerd[1582]: time="2025-11-24T01:48:07.178128087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 01:48:07.178453 containerd[1582]: time="2025-11-24T01:48:07.178413716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 01:48:07.178959 kubelet[2900]: E1124 01:48:07.178835 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:48:07.178959 kubelet[2900]: E1124 01:48:07.178908 2900 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 01:48:07.180999 kubelet[2900]: E1124 01:48:07.180922 2900 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8mnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f5d79d7cd-lvqjg_calico-system(621aa00e-6d25-484a-b356-0b520628e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 01:48:07.182485 kubelet[2900]: E1124 01:48:07.182419 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:48:07.848184 kubelet[2900]: E1124 01:48:07.847858 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:48:07.983748 sshd[5382]: Accepted publickey for core from 139.178.68.195 port 60396 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:07.985801 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:07.994779 systemd-logind[1561]: New session 20 of user core. Nov 24 01:48:08.002927 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 01:48:09.286688 sshd[5385]: Connection closed by 139.178.68.195 port 60396 Nov 24 01:48:09.300702 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:09.311528 systemd[1]: sshd@17-10.230.76.74:22-139.178.68.195:60396.service: Deactivated successfully. Nov 24 01:48:09.316585 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 01:48:09.318843 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Nov 24 01:48:09.322173 systemd-logind[1561]: Removed session 20. Nov 24 01:48:09.451952 systemd[1]: Started sshd@18-10.230.76.74:22-139.178.68.195:60410.service - OpenSSH per-connection server daemon (139.178.68.195:60410). 
Nov 24 01:48:10.415786 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 60410 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:10.418846 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:10.430492 systemd-logind[1561]: New session 21 of user core. Nov 24 01:48:10.433908 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 01:48:11.852643 kubelet[2900]: E1124 01:48:11.852251 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:48:12.161347 sshd[5400]: Connection closed by 139.178.68.195 port 60410 Nov 24 01:48:12.161645 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:12.174268 systemd[1]: sshd@18-10.230.76.74:22-139.178.68.195:60410.service: Deactivated successfully. Nov 24 01:48:12.181413 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 01:48:12.183809 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Nov 24 01:48:12.187184 systemd-logind[1561]: Removed session 21. Nov 24 01:48:12.322724 systemd[1]: Started sshd@19-10.230.76.74:22-139.178.68.195:38792.service - OpenSSH per-connection server daemon (139.178.68.195:38792). Nov 24 01:48:13.279608 sshd[5417]: Accepted publickey for core from 139.178.68.195 port 38792 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:13.282172 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:13.291987 systemd-logind[1561]: New session 22 of user core. Nov 24 01:48:13.295911 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 01:48:13.850027 kubelet[2900]: E1124 01:48:13.849725 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:48:14.360812 sshd[5420]: Connection closed by 139.178.68.195 port 38792 Nov 24 01:48:14.361713 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:14.371148 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. 
Nov 24 01:48:14.373064 systemd[1]: sshd@19-10.230.76.74:22-139.178.68.195:38792.service: Deactivated successfully. Nov 24 01:48:14.380818 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 01:48:14.386599 systemd-logind[1561]: Removed session 22. Nov 24 01:48:14.525170 systemd[1]: Started sshd@20-10.230.76.74:22-139.178.68.195:38800.service - OpenSSH per-connection server daemon (139.178.68.195:38800). Nov 24 01:48:15.454315 sshd[5430]: Accepted publickey for core from 139.178.68.195 port 38800 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:15.458040 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:15.476766 systemd-logind[1561]: New session 23 of user core. Nov 24 01:48:15.481119 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 01:48:15.855579 kubelet[2900]: E1124 01:48:15.855071 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687" Nov 24 01:48:16.226697 sshd[5433]: Connection closed by 139.178.68.195 port 38800 Nov 24 01:48:16.227210 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:16.234598 systemd[1]: sshd@20-10.230.76.74:22-139.178.68.195:38800.service: Deactivated successfully. Nov 24 01:48:16.241327 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 01:48:16.246057 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Nov 24 01:48:16.249989 systemd-logind[1561]: Removed session 23. 
Nov 24 01:48:16.843278 kubelet[2900]: E1124 01:48:16.843109 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:48:18.842152 kubelet[2900]: E1124 01:48:18.842099 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:48:21.392743 systemd[1]: Started sshd@21-10.230.76.74:22-139.178.68.195:33158.service - OpenSSH per-connection server daemon (139.178.68.195:33158). Nov 24 01:48:21.846062 kubelet[2900]: E1124 01:48:21.845595 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f5d79d7cd-lvqjg" podUID="621aa00e-6d25-484a-b356-0b520628e4b2" Nov 24 01:48:22.314804 sshd[5444]: Accepted publickey for core from 139.178.68.195 port 33158 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:22.316378 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:22.325339 systemd-logind[1561]: New session 24 of user core. Nov 24 01:48:22.332477 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 24 01:48:22.844808 kubelet[2900]: E1124 01:48:22.844130 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cb9c49d6-kkt7h" podUID="27e49083-c6b3-42ca-b3d9-4e1cc74718c7" Nov 24 01:48:23.112794 sshd[5449]: Connection closed by 139.178.68.195 port 33158 Nov 24 01:48:23.114895 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:23.122522 systemd[1]: sshd@21-10.230.76.74:22-139.178.68.195:33158.service: Deactivated successfully. Nov 24 01:48:23.128208 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 01:48:23.130947 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Nov 24 01:48:23.136415 systemd-logind[1561]: Removed session 24. Nov 24 01:48:25.844434 kubelet[2900]: E1124 01:48:25.844190 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-jvhv5" podUID="af4627b6-c7f1-489e-8935-b4a50923c295" Nov 24 01:48:28.273611 systemd[1]: Started sshd@22-10.230.76.74:22-139.178.68.195:33174.service - OpenSSH per-connection server daemon (139.178.68.195:33174). Nov 24 01:48:29.205849 sshd[5486]: Accepted publickey for core from 139.178.68.195 port 33174 ssh2: RSA SHA256:Kq6jONEkGxUYP/MRnXx9e/YvsDCsIU+M7abtaXrWMoY Nov 24 01:48:29.207220 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 01:48:29.216223 systemd-logind[1561]: New session 25 of user core. Nov 24 01:48:29.222139 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 24 01:48:29.843643 kubelet[2900]: E1124 01:48:29.842949 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9g82" podUID="bda40ac0-fe5e-4a6c-924c-3a5c697eb0d1" Nov 24 01:48:29.960897 sshd[5489]: Connection closed by 139.178.68.195 port 33174 Nov 24 01:48:29.961960 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Nov 24 01:48:29.971037 systemd[1]: sshd@22-10.230.76.74:22-139.178.68.195:33174.service: Deactivated successfully. Nov 24 01:48:29.976432 systemd[1]: session-25.scope: Deactivated successfully. Nov 24 01:48:29.980658 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Nov 24 01:48:29.984909 systemd-logind[1561]: Removed session 25. Nov 24 01:48:30.848522 kubelet[2900]: E1124 01:48:30.845969 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-687d5b8b8f-7nccl" podUID="5debaa4f-5f0a-45c9-bc91-84f4de6609a5" Nov 24 01:48:30.853194 kubelet[2900]: E1124 01:48:30.852835 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dc98b" podUID="d4c21c8f-271a-4e0d-ab8d-b3169fe61687"