Nov 1 01:57:07.918044 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 01:57:07.918087 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:57:07.918098 kernel: BIOS-provided physical RAM map:
Nov 1 01:57:07.918109 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 01:57:07.918116 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 01:57:07.918124 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 01:57:07.918132 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 1 01:57:07.918140 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 1 01:57:07.918147 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 01:57:07.918155 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 01:57:07.918162 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 01:57:07.918170 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 01:57:07.918180 kernel: NX (Execute Disable) protection: active
Nov 1 01:57:07.918188 kernel: APIC: Static calls initialized
Nov 1 01:57:07.918198 kernel: SMBIOS 2.8 present.
Nov 1 01:57:07.918207 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 1 01:57:07.918215 kernel: Hypervisor detected: KVM
Nov 1 01:57:07.918227 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 01:57:07.918236 kernel: kvm-clock: using sched offset of 3856269356 cycles
Nov 1 01:57:07.918245 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 01:57:07.918254 kernel: tsc: Detected 2294.608 MHz processor
Nov 1 01:57:07.918263 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 01:57:07.918271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 01:57:07.918280 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 1 01:57:07.918289 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 01:57:07.918297 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 01:57:07.918308 kernel: Using GB pages for direct mapping
Nov 1 01:57:07.918317 kernel: ACPI: Early table checksum verification disabled
Nov 1 01:57:07.918325 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 1 01:57:07.918334 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:57:07.918343 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:57:07.918351 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:57:07.918359 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 1 01:57:07.918368 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:57:07.918376 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:57:07.918388 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:57:07.918396 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001)
Nov 1 01:57:07.918405 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 1 01:57:07.918413 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 1 01:57:07.918422 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 1 01:57:07.918435 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 1 01:57:07.919459 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 1 01:57:07.919474 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 1 01:57:07.919484 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 1 01:57:07.919493 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 01:57:07.919502 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 01:57:07.919511 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 1 01:57:07.919520 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 1 01:57:07.919529 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 1 01:57:07.919538 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 1 01:57:07.919550 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 1 01:57:07.919559 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 1 01:57:07.919567 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 1 01:57:07.919576 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 1 01:57:07.919585 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 1 01:57:07.919594 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 1 01:57:07.919603 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 1 01:57:07.919612 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 1 01:57:07.919621 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 1 01:57:07.919633 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 1 01:57:07.919642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 01:57:07.919651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 01:57:07.919660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 1 01:57:07.919669 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 1 01:57:07.919678 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 1 01:57:07.919688 kernel: Zone ranges:
Nov 1 01:57:07.919697 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 01:57:07.919706 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 1 01:57:07.919718 kernel: Normal empty
Nov 1 01:57:07.919727 kernel: Movable zone start for each node
Nov 1 01:57:07.919736 kernel: Early memory node ranges
Nov 1 01:57:07.919745 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 01:57:07.919754 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 1 01:57:07.919763 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 1 01:57:07.919772 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 01:57:07.919781 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 01:57:07.919790 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 1 01:57:07.919799 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 01:57:07.919812 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 01:57:07.919821 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 01:57:07.919830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 01:57:07.919839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 01:57:07.919848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 01:57:07.919857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 01:57:07.919866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 01:57:07.919875 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 01:57:07.919884 kernel: TSC deadline timer available
Nov 1 01:57:07.919896 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 1 01:57:07.919905 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 01:57:07.919914 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 01:57:07.919923 kernel: Booting paravirtualized kernel on KVM
Nov 1 01:57:07.919933 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 01:57:07.919942 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 1 01:57:07.919951 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 1 01:57:07.919960 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 1 01:57:07.919969 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 01:57:07.919991 kernel: kvm-guest: PV spinlocks enabled
Nov 1 01:57:07.919999 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 01:57:07.920008 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:57:07.920017 kernel: random: crng init done
Nov 1 01:57:07.920025 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 01:57:07.920033 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 01:57:07.920042 kernel: Fallback order for Node 0: 0
Nov 1 01:57:07.920050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 1 01:57:07.920061 kernel: Policy zone: DMA32
Nov 1 01:57:07.920069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 01:57:07.920083 kernel: software IO TLB: area num 16.
Nov 1 01:57:07.920091 kernel: Memory: 1901540K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 194816K reserved, 0K cma-reserved)
Nov 1 01:57:07.920115 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 01:57:07.920124 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 01:57:07.920134 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 01:57:07.920143 kernel: Dynamic Preempt: voluntary
Nov 1 01:57:07.920152 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 01:57:07.920166 kernel: rcu: RCU event tracing is enabled.
Nov 1 01:57:07.920175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 01:57:07.920185 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 01:57:07.920194 kernel: Rude variant of Tasks RCU enabled.
Nov 1 01:57:07.920204 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 01:57:07.920226 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 01:57:07.920235 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 01:57:07.920245 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 1 01:57:07.920255 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 01:57:07.920264 kernel: Console: colour VGA+ 80x25
Nov 1 01:57:07.920273 kernel: printk: console [tty0] enabled
Nov 1 01:57:07.920283 kernel: printk: console [ttyS0] enabled
Nov 1 01:57:07.920297 kernel: ACPI: Core revision 20230628
Nov 1 01:57:07.920307 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 01:57:07.920316 kernel: x2apic enabled
Nov 1 01:57:07.920326 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 01:57:07.920336 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Nov 1 01:57:07.920348 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Nov 1 01:57:07.920358 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 01:57:07.920368 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 01:57:07.920378 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 01:57:07.920387 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 01:57:07.920397 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 01:57:07.920406 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 01:57:07.920416 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 01:57:07.920426 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 01:57:07.920435 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 01:57:07.920448 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 01:57:07.922462 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 01:57:07.922476 kernel: TAA: Mitigation: Clear CPU buffers
Nov 1 01:57:07.922487 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 01:57:07.922497 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 1 01:57:07.922507 kernel: active return thunk: its_return_thunk
Nov 1 01:57:07.922516 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 01:57:07.922526 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 01:57:07.922536 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 01:57:07.922546 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 01:57:07.922555 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 01:57:07.922570 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 01:57:07.922580 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 01:57:07.922590 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 1 01:57:07.922600 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 01:57:07.922609 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 1 01:57:07.922619 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 1 01:57:07.922629 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 1 01:57:07.922639 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Nov 1 01:57:07.922648 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Nov 1 01:57:07.922658 kernel: Freeing SMP alternatives memory: 32K
Nov 1 01:57:07.922668 kernel: pid_max: default: 32768 minimum: 301
Nov 1 01:57:07.922677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 01:57:07.922690 kernel: landlock: Up and running.
Nov 1 01:57:07.922700 kernel: SELinux: Initializing.
Nov 1 01:57:07.922709 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 01:57:07.922719 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 01:57:07.922729 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Nov 1 01:57:07.922739 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:57:07.922749 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:57:07.922759 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:57:07.922769 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 01:57:07.922782 kernel: signal: max sigframe size: 3632
Nov 1 01:57:07.922792 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 01:57:07.922802 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 01:57:07.922812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 01:57:07.922821 kernel: smp: Bringing up secondary CPUs ...
Nov 1 01:57:07.922831 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 01:57:07.922841 kernel: .... node #0, CPUs: #1
Nov 1 01:57:07.922851 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 1 01:57:07.922860 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 01:57:07.922870 kernel: smpboot: Max logical packages: 16
Nov 1 01:57:07.922883 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Nov 1 01:57:07.922893 kernel: devtmpfs: initialized
Nov 1 01:57:07.922903 kernel: x86/mm: Memory block size: 128MB
Nov 1 01:57:07.922913 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 01:57:07.922922 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 01:57:07.922932 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 01:57:07.922942 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 01:57:07.922952 kernel: audit: initializing netlink subsys (disabled)
Nov 1 01:57:07.922962 kernel: audit: type=2000 audit(1761962227.254:1): state=initialized audit_enabled=0 res=1
Nov 1 01:57:07.922975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 01:57:07.922985 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 01:57:07.922995 kernel: cpuidle: using governor menu
Nov 1 01:57:07.923005 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 01:57:07.923014 kernel: dca service started, version 1.12.1
Nov 1 01:57:07.923024 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 01:57:07.923034 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 01:57:07.923044 kernel: PCI: Using configuration type 1 for base access
Nov 1 01:57:07.923053 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 01:57:07.923066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 01:57:07.923082 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 01:57:07.923092 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 01:57:07.923101 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 01:57:07.923111 kernel: ACPI: Added _OSI(Module Device)
Nov 1 01:57:07.923121 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 01:57:07.923130 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 01:57:07.923140 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 01:57:07.923150 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 01:57:07.923163 kernel: ACPI: Interpreter enabled
Nov 1 01:57:07.923173 kernel: ACPI: PM: (supports S0 S5)
Nov 1 01:57:07.923183 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 01:57:07.923193 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 01:57:07.923202 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 01:57:07.923212 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 01:57:07.923222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 01:57:07.923416 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 01:57:07.923547 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 1 01:57:07.923642 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 1 01:57:07.923655 kernel: PCI host bridge to bus 0000:00
Nov 1 01:57:07.923768 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 01:57:07.923853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 01:57:07.923937 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 01:57:07.924021 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 1 01:57:07.924118 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 01:57:07.924202 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 1 01:57:07.924286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 01:57:07.924411 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 01:57:07.924528 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 1 01:57:07.924626 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 1 01:57:07.924726 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 1 01:57:07.924819 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 1 01:57:07.924913 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 01:57:07.925020 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.925123 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 1 01:57:07.925228 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.925324 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 1 01:57:07.925436 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.927592 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 1 01:57:07.927706 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.927805 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 1 01:57:07.927915 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.928011 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 1 01:57:07.928141 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.928241 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 1 01:57:07.928346 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.928452 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 1 01:57:07.928555 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 1 01:57:07.928654 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 1 01:57:07.928768 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 01:57:07.928864 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 01:57:07.928958 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 1 01:57:07.929052 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 1 01:57:07.929155 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 1 01:57:07.929259 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 1 01:57:07.929356 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 01:57:07.932419 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 1 01:57:07.932559 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 1 01:57:07.932656 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 01:57:07.932743 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 01:57:07.932836 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 01:57:07.932921 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 1 01:57:07.933036 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 1 01:57:07.933145 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 01:57:07.933239 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 01:57:07.933367 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 1 01:57:07.933465 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 1 01:57:07.933586 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 01:57:07.933685 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 01:57:07.933777 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 01:57:07.933889 kernel: pci_bus 0000:02: extended config space not accessible
Nov 1 01:57:07.933999 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 1 01:57:07.934108 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 1 01:57:07.934208 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 01:57:07.934306 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 01:57:07.934418 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 1 01:57:07.937779 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 1 01:57:07.937887 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 01:57:07.937976 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 01:57:07.938063 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 01:57:07.938169 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 1 01:57:07.938258 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 1 01:57:07.938354 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 01:57:07.938482 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 01:57:07.938578 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 01:57:07.938667 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 01:57:07.938752 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 01:57:07.938857 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 01:57:07.938953 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 01:57:07.939047 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 01:57:07.939150 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 01:57:07.939249 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 01:57:07.939342 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 01:57:07.939437 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 01:57:07.940051 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 01:57:07.940157 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 01:57:07.940252 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 01:57:07.940349 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 01:57:07.940463 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 01:57:07.940556 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 01:57:07.940570 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 01:57:07.940581 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 01:57:07.940591 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 01:57:07.940601 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 01:57:07.940611 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 01:57:07.940621 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 01:57:07.940631 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 01:57:07.940646 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 01:57:07.940656 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 01:57:07.940666 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 01:57:07.940676 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 01:57:07.940687 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 01:57:07.940697 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 01:57:07.940706 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 01:57:07.940716 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 01:57:07.940726 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 01:57:07.940740 kernel: iommu: Default domain type: Translated
Nov 1 01:57:07.940750 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 01:57:07.940760 kernel: PCI: Using ACPI for IRQ routing
Nov 1 01:57:07.940770 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 01:57:07.940780 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 01:57:07.940789 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 1 01:57:07.940885 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 01:57:07.940978 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 01:57:07.941083 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 01:57:07.941096 kernel: vgaarb: loaded
Nov 1 01:57:07.941106 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 01:57:07.941117 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 01:57:07.941127 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 01:57:07.941137 kernel: pnp: PnP ACPI init
Nov 1 01:57:07.941242 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 01:57:07.941257 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 01:57:07.941267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 01:57:07.941281 kernel: NET: Registered PF_INET protocol family
Nov 1 01:57:07.941291 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 01:57:07.941302 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 01:57:07.941312 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 01:57:07.941322 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 01:57:07.941332 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 01:57:07.941342 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 01:57:07.941352 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 01:57:07.941365 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 01:57:07.941375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 01:57:07.941385 kernel: NET: Registered PF_XDP protocol family
Nov 1 01:57:07.943028 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 1 01:57:07.943152 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 1 01:57:07.943252 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 1 01:57:07.943352 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 1 01:57:07.945522 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 1 01:57:07.945646 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 1 01:57:07.945749 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 1 01:57:07.945853 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 1 01:57:07.945976 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 1 01:57:07.946078 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 1 01:57:07.946173 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 1 01:57:07.946274 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 1 01:57:07.946368 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 1 01:57:07.946477 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 1 01:57:07.946571 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 1 01:57:07.946664 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 1 01:57:07.946763 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 01:57:07.946862 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 01:57:07.946956 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 01:57:07.947054 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 1 01:57:07.947155 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 01:57:07.947253 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 01:57:07.947346 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 01:57:07.947439 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 1 01:57:07.947548 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 01:57:07.947642 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 01:57:07.947736 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 01:57:07.947830 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 1 01:57:07.947925 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 01:57:07.948018 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 01:57:07.948120 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 01:57:07.948214 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 1 01:57:07.948308 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 01:57:07.948402 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 01:57:07.950533 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 01:57:07.950638 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 1 01:57:07.950728 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 01:57:07.950815 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 01:57:07.950903 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 01:57:07.950990 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 1 01:57:07.951083 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 01:57:07.951169 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 01:57:07.951279 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 01:57:07.951379 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 1 01:57:07.953480 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 01:57:07.953585 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 01:57:07.953677 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 01:57:07.953764 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 1 01:57:07.953856 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 01:57:07.953941 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 01:57:07.954028 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 01:57:07.954113 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 01:57:07.954189 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 01:57:07.954265 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 1 01:57:07.954339 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 01:57:07.954414 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 1 01:57:07.954540 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 1 01:57:07.954621 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 1 01:57:07.954700 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 01:57:07.954810 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 1 01:57:07.954915 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 1 01:57:07.955005 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 1 01:57:07.955104 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 01:57:07.955200 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 1 01:57:07.955287 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 1 01:57:07.955374 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 01:57:07.955476 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 1 01:57:07.955565 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 1 01:57:07.955652 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 01:57:07.955756 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 1 01:57:07.955845 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 1 01:57:07.955932 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 01:57:07.956029 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 1 01:57:07.956125 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 1 01:57:07.956212 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 01:57:07.956308 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 1 01:57:07.956400 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 1 01:57:07.958547 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 01:57:07.958657 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 1 01:57:07.958748 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 1 01:57:07.958836 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 01:57:07.958852 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 01:57:07.958863 kernel: PCI: CLS 0 bytes, default 64
Nov 1 01:57:07.958879 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 01:57:07.958890 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 1 01:57:07.958901 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 01:57:07.958922 kernel: clocksource: tsc:
mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Nov 1 01:57:07.958932 kernel: Initialise system trusted keyrings Nov 1 01:57:07.958942 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 01:57:07.958952 kernel: Key type asymmetric registered Nov 1 01:57:07.958961 kernel: Asymmetric key parser 'x509' registered Nov 1 01:57:07.958971 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:57:07.958984 kernel: io scheduler mq-deadline registered Nov 1 01:57:07.958993 kernel: io scheduler kyber registered Nov 1 01:57:07.959003 kernel: io scheduler bfq registered Nov 1 01:57:07.959101 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 1 01:57:07.959189 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 1 01:57:07.959276 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.959363 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 1 01:57:07.959464 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 1 01:57:07.959551 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.959654 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 1 01:57:07.959753 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 1 01:57:07.959848 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.959934 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 1 01:57:07.960022 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 1 01:57:07.960116 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.960202 kernel: pcieport 
0000:00:02.4: PME: Signaling with IRQ 28 Nov 1 01:57:07.960287 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 1 01:57:07.960373 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.962533 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 1 01:57:07.962650 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 1 01:57:07.962751 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.962838 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 1 01:57:07.962922 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 1 01:57:07.963007 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.963117 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 1 01:57:07.963215 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 1 01:57:07.963316 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:57:07.963329 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:57:07.963340 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 01:57:07.963350 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 01:57:07.963360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:57:07.963370 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:57:07.963380 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 01:57:07.963393 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 01:57:07.963402 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 01:57:07.963412 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Nov 1 01:57:07.963521 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 01:57:07.963602 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 01:57:07.963681 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T01:57:07 UTC (1761962227) Nov 1 01:57:07.963758 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 01:57:07.963775 kernel: intel_pstate: CPU model not supported Nov 1 01:57:07.963785 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:57:07.963795 kernel: Segment Routing with IPv6 Nov 1 01:57:07.963805 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:57:07.963814 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:57:07.963824 kernel: Key type dns_resolver registered Nov 1 01:57:07.963834 kernel: IPI shorthand broadcast: enabled Nov 1 01:57:07.963844 kernel: sched_clock: Marking stable (1032002195, 120405994)->(1259436173, -107027984) Nov 1 01:57:07.963854 kernel: registered taskstats version 1 Nov 1 01:57:07.963864 kernel: Loading compiled-in X.509 certificates Nov 1 01:57:07.963877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 01:57:07.963886 kernel: Key type .fscrypt registered Nov 1 01:57:07.963895 kernel: Key type fscrypt-provisioning registered Nov 1 01:57:07.963905 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 01:57:07.963914 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:57:07.963924 kernel: ima: No architecture policies found Nov 1 01:57:07.963933 kernel: clk: Disabling unused clocks Nov 1 01:57:07.963943 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 01:57:07.963953 kernel: Write protecting the kernel read-only data: 36864k Nov 1 01:57:07.963966 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 01:57:07.963976 kernel: Run /init as init process Nov 1 01:57:07.963985 kernel: with arguments: Nov 1 01:57:07.963995 kernel: /init Nov 1 01:57:07.964004 kernel: with environment: Nov 1 01:57:07.964013 kernel: HOME=/ Nov 1 01:57:07.964022 kernel: TERM=linux Nov 1 01:57:07.964034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:57:07.964051 systemd[1]: Detected virtualization kvm. Nov 1 01:57:07.964061 systemd[1]: Detected architecture x86-64. Nov 1 01:57:07.964077 systemd[1]: Running in initrd. Nov 1 01:57:07.964087 systemd[1]: No hostname configured, using default hostname. Nov 1 01:57:07.964096 systemd[1]: Hostname set to . Nov 1 01:57:07.964107 systemd[1]: Initializing machine ID from VM UUID. Nov 1 01:57:07.964117 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:57:07.964127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:57:07.964140 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:57:07.964151 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 1 01:57:07.964161 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:57:07.964171 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 01:57:07.964181 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 01:57:07.964193 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 01:57:07.964203 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 01:57:07.964217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:57:07.964227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:57:07.964237 systemd[1]: Reached target paths.target - Path Units. Nov 1 01:57:07.964247 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:57:07.964257 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:57:07.964267 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:57:07.964278 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:57:07.964289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:57:07.964302 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:57:07.964312 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:57:07.964322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:57:07.964332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:57:07.964342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:57:07.964352 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:57:07.964362 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Nov 1 01:57:07.964372 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:57:07.964382 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 01:57:07.964396 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:57:07.964406 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:57:07.964416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 01:57:07.964426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:57:07.964436 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 01:57:07.969801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:57:07.969855 systemd-journald[202]: Collecting audit messages is disabled. Nov 1 01:57:07.969888 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:57:07.969899 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:57:07.969913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:57:07.969925 systemd-journald[202]: Journal started Nov 1 01:57:07.969948 systemd-journald[202]: Runtime Journal (/run/log/journal/dfbb68991c564196a9ece6e1f316e46a) is 4.7M, max 38.0M, 33.2M free. Nov 1 01:57:07.941556 systemd-modules-load[203]: Inserted module 'overlay' Nov 1 01:57:07.986458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:57:07.989430 systemd-modules-load[203]: Inserted module 'br_netfilter' Nov 1 01:57:07.991755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:57:07.991779 kernel: Bridge firewalling registered Nov 1 01:57:07.991795 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 1 01:57:07.992777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:57:07.993361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:57:08.002640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:57:08.006366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:57:08.009323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:57:08.014112 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:57:08.015840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:57:08.021588 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 01:57:08.028717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:57:08.030059 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:57:08.038500 dracut-cmdline[234]: dracut-dracut-053 Nov 1 01:57:08.039214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:57:08.041455 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:57:08.085031 systemd-resolved[241]: Positive Trust Anchors: Nov 1 01:57:08.085053 systemd-resolved[241]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:57:08.085103 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:57:08.089279 systemd-resolved[241]: Defaulting to hostname 'linux'. Nov 1 01:57:08.090531 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:57:08.092025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:57:08.155509 kernel: SCSI subsystem initialized Nov 1 01:57:08.166483 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:57:08.178591 kernel: iscsi: registered transport (tcp) Nov 1 01:57:08.200498 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:57:08.200602 kernel: QLogic iSCSI HBA Driver Nov 1 01:57:08.270643 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 01:57:08.278596 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 01:57:08.306698 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 01:57:08.306793 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:57:08.306810 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 01:57:08.353519 kernel: raid6: avx512x4 gen() 28384 MB/s Nov 1 01:57:08.370499 kernel: raid6: avx512x2 gen() 28593 MB/s Nov 1 01:57:08.387607 kernel: raid6: avx512x1 gen() 27596 MB/s Nov 1 01:57:08.404519 kernel: raid6: avx2x4 gen() 21681 MB/s Nov 1 01:57:08.421532 kernel: raid6: avx2x2 gen() 21408 MB/s Nov 1 01:57:08.438560 kernel: raid6: avx2x1 gen() 18736 MB/s Nov 1 01:57:08.438700 kernel: raid6: using algorithm avx512x2 gen() 28593 MB/s Nov 1 01:57:08.456628 kernel: raid6: .... xor() 22153 MB/s, rmw enabled Nov 1 01:57:08.456747 kernel: raid6: using avx512x2 recovery algorithm Nov 1 01:57:08.489505 kernel: xor: automatically using best checksumming function avx Nov 1 01:57:08.668789 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 01:57:08.689046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:57:08.695668 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:57:08.717143 systemd-udevd[421]: Using default interface naming scheme 'v255'. Nov 1 01:57:08.722618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:57:08.730676 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 01:57:08.765382 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Nov 1 01:57:08.811098 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 01:57:08.816604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:57:08.880971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:57:08.890622 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 1 01:57:08.912581 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 01:57:08.914656 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:57:08.916658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:57:08.917845 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:57:08.925598 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:57:08.943330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:57:08.981470 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:57:08.990472 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 1 01:57:08.993470 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 01:57:09.007811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:57:09.007938 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:57:09.010542 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:57:09.012138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:57:09.012280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:57:09.012736 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:57:09.027381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:57:09.030700 kernel: ACPI: bus type USB registered Nov 1 01:57:09.030727 kernel: usbcore: registered new interface driver usbfs Nov 1 01:57:09.032476 kernel: usbcore: registered new interface driver hub Nov 1 01:57:09.039296 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 1 01:57:09.039335 kernel: GPT:17805311 != 125829119 Nov 1 01:57:09.039350 kernel: usbcore: registered new device driver usb Nov 1 01:57:09.039363 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:57:09.039376 kernel: GPT:17805311 != 125829119 Nov 1 01:57:09.039388 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 01:57:09.039400 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:57:09.044516 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 01:57:09.050485 kernel: AES CTR mode by8 optimization enabled Nov 1 01:57:09.075486 kernel: libata version 3.00 loaded. Nov 1 01:57:09.106713 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:57:09.120543 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Nov 1 01:57:09.123484 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 01:57:09.123698 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 01:57:09.127172 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 01:57:09.131840 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (468) Nov 1 01:57:09.134882 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 01:57:09.136757 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 01:57:09.136399 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 01:57:09.140950 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 01:57:09.141135 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 1 01:57:09.144658 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 01:57:09.146702 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 01:57:09.148475 kernel: scsi host0: ahci Nov 1 01:57:09.150469 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 01:57:09.152472 kernel: scsi host1: ahci Nov 1 01:57:09.155533 kernel: scsi host2: ahci Nov 1 01:57:09.155686 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 1 01:57:09.159471 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 1 01:57:09.159625 kernel: scsi host3: ahci Nov 1 01:57:09.161590 kernel: hub 1-0:1.0: USB hub found Nov 1 01:57:09.161773 kernel: hub 1-0:1.0: 4 ports detected Nov 1 01:57:09.164336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 01:57:09.180307 kernel: scsi host4: ahci Nov 1 01:57:09.180507 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 01:57:09.180655 kernel: hub 2-0:1.0: USB hub found Nov 1 01:57:09.180789 kernel: scsi host5: ahci Nov 1 01:57:09.180908 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Nov 1 01:57:09.180929 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Nov 1 01:57:09.180943 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Nov 1 01:57:09.180969 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Nov 1 01:57:09.180982 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Nov 1 01:57:09.180995 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Nov 1 01:57:09.181008 kernel: hub 2-0:1.0: 4 ports detected Nov 1 01:57:09.180326 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 1 01:57:09.180809 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Nov 1 01:57:09.181531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:57:09.188700 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:57:09.196001 disk-uuid[576]: Primary Header is updated. Nov 1 01:57:09.196001 disk-uuid[576]: Secondary Entries is updated. Nov 1 01:57:09.196001 disk-uuid[576]: Secondary Header is updated. Nov 1 01:57:09.199468 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:57:09.204492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:57:09.415538 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 01:57:09.486496 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:57:09.486623 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 01:57:09.492125 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:57:09.492210 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 01:57:09.494703 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:57:09.498501 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:57:09.566485 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:57:09.571566 kernel: usbcore: registered new interface driver usbhid Nov 1 01:57:09.571645 kernel: usbhid: USB HID core driver Nov 1 01:57:09.577247 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 01:57:09.577313 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 1 01:57:10.210483 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:57:10.213859 disk-uuid[577]: The operation has completed successfully. Nov 1 01:57:10.250457 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:57:10.251214 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Nov 1 01:57:10.269952 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:57:10.290581 sh[589]: Success Nov 1 01:57:10.307496 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:57:10.379772 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:57:10.383242 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 01:57:10.400606 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:57:10.413542 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:57:10.413634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:57:10.414758 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:57:10.416570 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:57:10.416624 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:57:10.424697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:57:10.425908 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:57:10.437810 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:57:10.443607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 01:57:10.456543 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:57:10.456582 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:57:10.456595 kernel: BTRFS info (device vda6): using free space tree Nov 1 01:57:10.461573 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 01:57:10.472969 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 1 01:57:10.473604 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:57:10.481420 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 01:57:10.487603 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 01:57:10.569806 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:57:10.581912 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 01:57:10.600520 ignition[690]: Ignition 2.19.0 Nov 1 01:57:10.600533 ignition[690]: Stage: fetch-offline Nov 1 01:57:10.600593 ignition[690]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:57:10.600609 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 01:57:10.604126 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:57:10.600735 ignition[690]: parsed url from cmdline: "" Nov 1 01:57:10.600739 ignition[690]: no config URL provided Nov 1 01:57:10.600744 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:57:10.600753 ignition[690]: no config at "/usr/lib/ignition/user.ign" Nov 1 01:57:10.600758 ignition[690]: failed to fetch config: resource requires networking Nov 1 01:57:10.600974 ignition[690]: Ignition finished successfully Nov 1 01:57:10.624601 systemd-networkd[772]: lo: Link UP Nov 1 01:57:10.624618 systemd-networkd[772]: lo: Gained carrier Nov 1 01:57:10.626698 systemd-networkd[772]: Enumeration completed Nov 1 01:57:10.627250 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 01:57:10.627256 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:57:10.627722 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 1 01:57:10.628845 systemd-networkd[772]: eth0: Link UP
Nov 1 01:57:10.628849 systemd-networkd[772]: eth0: Gained carrier
Nov 1 01:57:10.628857 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 01:57:10.631743 systemd[1]: Reached target network.target - Network.
Nov 1 01:57:10.641532 systemd-networkd[772]: eth0: DHCPv4 address 10.244.90.154/30, gateway 10.244.90.153 acquired from 10.244.90.153
Nov 1 01:57:10.643686 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 01:57:10.666036 ignition[779]: Ignition 2.19.0
Nov 1 01:57:10.666053 ignition[779]: Stage: fetch
Nov 1 01:57:10.666305 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:57:10.666320 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:57:10.666472 ignition[779]: parsed url from cmdline: ""
Nov 1 01:57:10.666477 ignition[779]: no config URL provided
Nov 1 01:57:10.666484 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 01:57:10.666495 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Nov 1 01:57:10.666969 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Nov 1 01:57:10.669149 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Nov 1 01:57:10.670120 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Nov 1 01:57:10.685138 ignition[779]: GET result: OK
Nov 1 01:57:10.685245 ignition[779]: parsing config with SHA512: 12609c43d729fcc1d27ffd104d14dc70d6b1f0f295a1f45c80ba6fb4e11d22718381070e1ddf2c5b18db5d0a33202c65c3b6058145fcb1ded3da1c58e9d1be29
Nov 1 01:57:10.690828 unknown[779]: fetched base config from "system"
Nov 1 01:57:10.690845 unknown[779]: fetched base config from "system"
Nov 1 01:57:10.691393 ignition[779]: fetch: fetch complete
Nov 1 01:57:10.690855 unknown[779]: fetched user config from "openstack"
Nov 1 01:57:10.691401 ignition[779]: fetch: fetch passed
Nov 1 01:57:10.691493 ignition[779]: Ignition finished successfully
Nov 1 01:57:10.696737 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 01:57:10.702676 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 01:57:10.722091 ignition[787]: Ignition 2.19.0
Nov 1 01:57:10.722107 ignition[787]: Stage: kargs
Nov 1 01:57:10.722307 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:57:10.722318 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:57:10.723277 ignition[787]: kargs: kargs passed
Nov 1 01:57:10.724790 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 01:57:10.723321 ignition[787]: Ignition finished successfully
Nov 1 01:57:10.733652 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 01:57:10.751544 ignition[793]: Ignition 2.19.0
Nov 1 01:57:10.751556 ignition[793]: Stage: disks
Nov 1 01:57:10.751727 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:57:10.751737 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:57:10.752740 ignition[793]: disks: disks passed
Nov 1 01:57:10.754978 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 01:57:10.752790 ignition[793]: Ignition finished successfully
Nov 1 01:57:10.756073 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 01:57:10.756479 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 01:57:10.756926 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 01:57:10.757302 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 01:57:10.757745 systemd[1]: Reached target basic.target - Basic System.
Nov 1 01:57:10.766580 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 01:57:10.783657 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 01:57:10.787626 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 01:57:10.795573 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 01:57:10.901484 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 01:57:10.901351 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 01:57:10.902320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 01:57:10.911537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 01:57:10.913856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 01:57:10.914840 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 01:57:10.917621 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Nov 1 01:57:10.918404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 01:57:10.918434 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:57:10.927083 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Nov 1 01:57:10.927108 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:57:10.927122 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:57:10.927135 kernel: BTRFS info (device vda6): using free space tree
Nov 1 01:57:10.929475 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 01:57:10.934461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 01:57:10.942644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 01:57:10.956321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 01:57:10.998864 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 01:57:11.006423 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Nov 1 01:57:11.013473 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 01:57:11.019014 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 01:57:11.119134 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 01:57:11.124564 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 01:57:11.126622 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 01:57:11.136468 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:57:11.161626 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 01:57:11.173465 ignition[925]: INFO : Ignition 2.19.0
Nov 1 01:57:11.173465 ignition[925]: INFO : Stage: mount
Nov 1 01:57:11.173465 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:57:11.173465 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:57:11.175899 ignition[925]: INFO : mount: mount passed
Nov 1 01:57:11.176358 ignition[925]: INFO : Ignition finished successfully
Nov 1 01:57:11.178367 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 01:57:11.415266 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 01:57:12.020594 systemd-networkd[772]: eth0: Gained IPv6LL
Nov 1 01:57:13.531850 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:16a6:24:19ff:fef4:5a9a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:16a6:24:19ff:fef4:5a9a/64 assigned by NDisc.
Nov 1 01:57:13.531874 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 1 01:57:18.093633 coreos-metadata[811]: Nov 01 01:57:18.093 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 01:57:18.113907 coreos-metadata[811]: Nov 01 01:57:18.113 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 1 01:57:18.126140 coreos-metadata[811]: Nov 01 01:57:18.126 INFO Fetch successful
Nov 1 01:57:18.127888 coreos-metadata[811]: Nov 01 01:57:18.127 INFO wrote hostname srv-gnbw4.gb1.brightbox.com to /sysroot/etc/hostname
Nov 1 01:57:18.131363 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Nov 1 01:57:18.131680 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Nov 1 01:57:18.143539 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 01:57:18.152015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 01:57:18.169502 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Nov 1 01:57:18.175498 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:57:18.175576 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:57:18.175618 kernel: BTRFS info (device vda6): using free space tree
Nov 1 01:57:18.180484 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 01:57:18.185910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 01:57:18.212453 ignition[960]: INFO : Ignition 2.19.0
Nov 1 01:57:18.213174 ignition[960]: INFO : Stage: files
Nov 1 01:57:18.213877 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:57:18.214424 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:57:18.216075 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 01:57:18.217864 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 01:57:18.218513 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 01:57:18.222413 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 01:57:18.223311 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 01:57:18.224360 unknown[960]: wrote ssh authorized keys file for user: core
Nov 1 01:57:18.224995 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 01:57:18.225683 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 01:57:18.226312 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 01:57:18.226312 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 01:57:18.226312 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 01:57:18.441397 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 01:57:18.752535 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 01:57:18.752535 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 01:57:18.752535 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 01:57:18.752535 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:57:18.760669 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 01:57:19.092309 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 1 01:57:21.291908 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:57:21.291908 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 1 01:57:21.298460 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 01:57:21.300211 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 01:57:21.300992 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 1 01:57:21.300992 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 1 01:57:21.300992 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:57:21.300992 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:57:21.300992 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 1 01:57:21.300992 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 01:57:21.304473 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 01:57:21.304473 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:57:21.304473 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:57:21.304473 ignition[960]: INFO : files: files passed
Nov 1 01:57:21.304473 ignition[960]: INFO : Ignition finished successfully
Nov 1 01:57:21.305015 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 01:57:21.314963 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 01:57:21.321883 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 01:57:21.322695 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 01:57:21.322802 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 01:57:21.337143 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:57:21.337143 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:57:21.339486 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:57:21.340923 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:57:21.341554 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 01:57:21.346581 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 01:57:21.372433 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 01:57:21.373119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 01:57:21.374822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 01:57:21.375334 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 01:57:21.376281 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 01:57:21.377617 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 01:57:21.397508 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:57:21.404646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 01:57:21.418427 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:57:21.419189 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:57:21.421084 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 01:57:21.423310 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 01:57:21.423608 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:57:21.426255 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 01:57:21.427675 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 01:57:21.429989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 01:57:21.432077 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:57:21.433748 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 01:57:21.435063 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 01:57:21.436338 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 01:57:21.437641 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 01:57:21.438849 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 01:57:21.440109 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 01:57:21.441240 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 01:57:21.441401 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 01:57:21.442845 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:57:21.443608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:57:21.444336 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 01:57:21.444456 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:57:21.445212 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 01:57:21.445338 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 01:57:21.446369 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 01:57:21.446492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:57:21.447388 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 01:57:21.447500 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 01:57:21.456691 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 01:57:21.459652 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 01:57:21.460101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 01:57:21.460246 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:57:21.461024 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 01:57:21.461149 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 01:57:21.466918 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 01:57:21.467017 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 01:57:21.479684 ignition[1013]: INFO : Ignition 2.19.0
Nov 1 01:57:21.480515 ignition[1013]: INFO : Stage: umount
Nov 1 01:57:21.481426 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:57:21.481426 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:57:21.483714 ignition[1013]: INFO : umount: umount passed
Nov 1 01:57:21.483714 ignition[1013]: INFO : Ignition finished successfully
Nov 1 01:57:21.481527 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 01:57:21.484505 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 01:57:21.484605 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 01:57:21.485871 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 01:57:21.485912 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 01:57:21.486488 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 01:57:21.486530 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 01:57:21.487233 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 01:57:21.487270 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 01:57:21.488699 systemd[1]: Stopped target network.target - Network.
Nov 1 01:57:21.489370 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 01:57:21.489417 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 01:57:21.490172 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 01:57:21.490880 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 01:57:21.494478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:57:21.495462 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 01:57:21.495852 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 01:57:21.496609 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 01:57:21.496644 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 01:57:21.498055 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 01:57:21.498097 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 01:57:21.498812 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 01:57:21.498850 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 01:57:21.500380 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 01:57:21.500430 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 01:57:21.501306 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 01:57:21.503103 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 01:57:21.505560 systemd-networkd[772]: eth0: DHCPv6 lease lost
Nov 1 01:57:21.507464 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 01:57:21.507575 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 01:57:21.509438 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 01:57:21.510778 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 01:57:21.512833 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 01:57:21.513320 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:57:21.517534 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 01:57:21.517937 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 01:57:21.517983 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 01:57:21.518436 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 01:57:21.518485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:57:21.518896 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 01:57:21.518931 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:57:21.522551 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 01:57:21.522598 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:57:21.523623 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:57:21.534189 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 01:57:21.534315 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 01:57:21.537016 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 01:57:21.537163 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:57:21.538911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 01:57:21.538983 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:57:21.540125 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 01:57:21.540161 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:57:21.541892 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 01:57:21.541936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 01:57:21.544838 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 01:57:21.544880 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 01:57:21.547268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 01:57:21.547313 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:57:21.555894 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 01:57:21.556330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 01:57:21.557006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:57:21.557478 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 01:57:21.557526 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:57:21.560739 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 01:57:21.560795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:57:21.562721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 01:57:21.562769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:57:21.565015 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 01:57:21.565099 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 01:57:21.565713 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 01:57:21.565796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 01:57:21.567103 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 01:57:21.568087 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 01:57:21.568145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 01:57:21.576576 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 01:57:21.584673 systemd[1]: Switching root.
Nov 1 01:57:21.620689 systemd-journald[202]: Journal stopped
Nov 1 01:57:22.755761 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Nov 1 01:57:22.755876 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 01:57:22.755902 kernel: SELinux: policy capability open_perms=1
Nov 1 01:57:22.755920 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 01:57:22.755939 kernel: SELinux: policy capability always_check_network=0
Nov 1 01:57:22.755959 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 01:57:22.755981 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 01:57:22.755994 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 01:57:22.756018 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 01:57:22.756032 kernel: audit: type=1403 audit(1761962241.854:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 01:57:22.756048 systemd[1]: Successfully loaded SELinux policy in 41.117ms.
Nov 1 01:57:22.756073 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.125ms.
Nov 1 01:57:22.756088 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 01:57:22.756103 systemd[1]: Detected virtualization kvm.
Nov 1 01:57:22.756118 systemd[1]: Detected architecture x86-64.
Nov 1 01:57:22.756131 systemd[1]: Detected first boot.
Nov 1 01:57:22.756149 systemd[1]: Hostname set to .
Nov 1 01:57:22.756165 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 01:57:22.756179 zram_generator::config[1072]: No configuration found.
Nov 1 01:57:22.756195 systemd[1]: Populated /etc with preset unit settings.
Nov 1 01:57:22.756209 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 01:57:22.756224 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 01:57:22.756245 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 01:57:22.756259 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 01:57:22.756284 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 01:57:22.756299 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 01:57:22.756312 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 01:57:22.756328 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 01:57:22.756343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 01:57:22.756357 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 01:57:22.756371 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:57:22.756386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:57:22.756400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 01:57:22.756418 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 01:57:22.756433 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 01:57:22.756992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 01:57:22.757015 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 01:57:22.757029 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:57:22.757044 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 01:57:22.757067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:57:22.757083 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 01:57:22.757098 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 01:57:22.757112 systemd[1]: Reached target swap.target - Swaps.
Nov 1 01:57:22.757126 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 01:57:22.757140 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 01:57:22.757340 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 01:57:22.757357 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 01:57:22.757371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:57:22.757385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:57:22.757399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:57:22.757413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 01:57:22.757427 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 01:57:22.757855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 01:57:22.757881 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 01:57:22.757896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:22.757918 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 01:57:22.757932 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 01:57:22.757946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 01:57:22.757962 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 01:57:22.757976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:57:22.757990 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 01:57:22.758004 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 01:57:22.758019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:57:22.758042 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 01:57:22.758056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:57:22.758070 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 01:57:22.758085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:57:22.758099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 01:57:22.758114 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 01:57:22.758130 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 01:57:22.758143 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 01:57:22.758158 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 01:57:22.758175 kernel: loop: module loaded
Nov 1 01:57:22.758194 kernel: fuse: init (API version 7.39)
Nov 1 01:57:22.758209 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 01:57:22.758223 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 01:57:22.758237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 01:57:22.758251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:22.758265 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 01:57:22.758280 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 01:57:22.758300 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 01:57:22.758315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 01:57:22.758328 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 01:57:22.758347 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 01:57:22.758361 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:57:22.758375 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 01:57:22.758390 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 01:57:22.758405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:57:22.758424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:57:22.758463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:57:22.758479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:57:22.758492 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 01:57:22.758507 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 01:57:22.758526 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:57:22.758539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:57:22.758553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:57:22.758575 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 01:57:22.758589 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 01:57:22.758634 systemd-journald[1173]: Collecting audit messages is disabled.
Nov 1 01:57:22.758675 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 01:57:22.758690 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 01:57:22.758705 systemd-journald[1173]: Journal started
Nov 1 01:57:22.758734 systemd-journald[1173]: Runtime Journal (/run/log/journal/dfbb68991c564196a9ece6e1f316e46a) is 4.7M, max 38.0M, 33.2M free.
Nov 1 01:57:22.764928 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 01:57:22.767459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 01:57:22.774468 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 01:57:22.777490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 01:57:22.786471 kernel: ACPI: bus type drm_connector registered
Nov 1 01:57:22.786519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 01:57:22.801498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 01:57:22.808492 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 01:57:22.812820 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 01:57:22.814661 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 01:57:22.823623 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 01:57:22.824294 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 01:57:22.824840 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 01:57:22.826659 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 01:57:22.844825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 01:57:22.864371 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 01:57:22.865228 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 01:57:22.874791 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 01:57:22.885952 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:57:22.895777 systemd-journald[1173]: Time spent on flushing to /var/log/journal/dfbb68991c564196a9ece6e1f316e46a is 35.338ms for 1141 entries.
Nov 1 01:57:22.895777 systemd-journald[1173]: System Journal (/var/log/journal/dfbb68991c564196a9ece6e1f316e46a) is 8.0M, max 584.8M, 576.8M free.
Nov 1 01:57:22.939734 systemd-journald[1173]: Received client request to flush runtime journal.
Nov 1 01:57:22.911959 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Nov 1 01:57:22.911974 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Nov 1 01:57:22.933378 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:57:22.943318 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 01:57:22.944237 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:57:22.945917 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 01:57:22.956595 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 01:57:22.979273 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 01:57:22.985761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 01:57:22.987040 udevadm[1247]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 01:57:23.005000 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Nov 1 01:57:23.005304 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Nov 1 01:57:23.010497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:57:23.561303 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 01:57:23.573678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:57:23.597231 systemd-udevd[1257]: Using default interface naming scheme 'v255'.
Nov 1 01:57:23.616418 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:57:23.631682 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 01:57:23.653585 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 01:57:23.700393 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 1 01:57:23.711907 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 01:57:23.716490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1269)
Nov 1 01:57:23.806637 systemd-networkd[1267]: lo: Link UP
Nov 1 01:57:23.806645 systemd-networkd[1267]: lo: Gained carrier
Nov 1 01:57:23.807920 systemd-networkd[1267]: Enumeration completed
Nov 1 01:57:23.808065 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 01:57:23.808288 systemd-networkd[1267]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 01:57:23.808292 systemd-networkd[1267]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:57:23.817574 systemd-networkd[1267]: eth0: Link UP
Nov 1 01:57:23.817582 systemd-networkd[1267]: eth0: Gained carrier
Nov 1 01:57:23.817629 systemd-networkd[1267]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 01:57:23.818611 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 01:57:23.822778 systemd-networkd[1267]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 01:57:23.834671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 01:57:23.840490 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 01:57:23.844525 systemd-networkd[1267]: eth0: DHCPv4 address 10.244.90.154/30, gateway 10.244.90.153 acquired from 10.244.90.153
Nov 1 01:57:23.845464 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 01:57:23.848932 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 01:57:23.849761 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 01:57:23.849903 kernel: ACPI: button: Power Button [PWRF]
Nov 1 01:57:23.851462 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 01:57:23.893470 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 1 01:57:23.924725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:57:24.045839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:57:24.082916 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 01:57:24.091173 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 01:57:24.115470 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 01:57:24.145276 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 01:57:24.146964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:57:24.154616 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 01:57:24.160755 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 01:57:24.186272 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 01:57:24.189206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 01:57:24.193103 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 01:57:24.193188 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 01:57:24.193795 systemd[1]: Reached target machines.target - Containers.
Nov 1 01:57:24.195724 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 01:57:24.202563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 01:57:24.204587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 01:57:24.205116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:57:24.207737 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 01:57:24.217811 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 01:57:24.223789 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 01:57:24.227392 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 01:57:24.242480 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 01:57:24.243182 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 01:57:24.246786 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 01:57:24.259384 kernel: loop0: detected capacity change from 0 to 142488
Nov 1 01:57:24.295473 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 01:57:24.317608 kernel: loop1: detected capacity change from 0 to 224512
Nov 1 01:57:24.344556 kernel: loop2: detected capacity change from 0 to 8
Nov 1 01:57:24.374552 kernel: loop3: detected capacity change from 0 to 140768
Nov 1 01:57:24.415280 kernel: loop4: detected capacity change from 0 to 142488
Nov 1 01:57:24.438464 kernel: loop5: detected capacity change from 0 to 224512
Nov 1 01:57:24.450525 kernel: loop6: detected capacity change from 0 to 8
Nov 1 01:57:24.453472 kernel: loop7: detected capacity change from 0 to 140768
Nov 1 01:57:24.465256 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Nov 1 01:57:24.465726 (sd-merge)[1321]: Merged extensions into '/usr'.
Nov 1 01:57:24.470938 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 01:57:24.470960 systemd[1]: Reloading...
Nov 1 01:57:24.532521 zram_generator::config[1345]: No configuration found.
Nov 1 01:57:24.677431 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 01:57:24.726639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:57:24.787891 systemd[1]: Reloading finished in 316 ms.
Nov 1 01:57:24.807095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 01:57:24.812563 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 01:57:24.825643 systemd[1]: Starting ensure-sysext.service...
Nov 1 01:57:24.830599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 01:57:24.834422 systemd[1]: Reloading requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)...
Nov 1 01:57:24.834439 systemd[1]: Reloading...
Nov 1 01:57:24.868248 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 01:57:24.868608 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 01:57:24.869902 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 01:57:24.870193 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Nov 1 01:57:24.871019 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Nov 1 01:57:24.874887 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 01:57:24.874977 systemd-tmpfiles[1413]: Skipping /boot
Nov 1 01:57:24.886967 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 01:57:24.887100 systemd-tmpfiles[1413]: Skipping /boot
Nov 1 01:57:24.912463 zram_generator::config[1444]: No configuration found.
Nov 1 01:57:25.045970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:57:25.104382 systemd[1]: Reloading finished in 268 ms.
Nov 1 01:57:25.136035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:57:25.151633 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 01:57:25.167606 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 01:57:25.171959 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 01:57:25.175748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 01:57:25.178071 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 01:57:25.196066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:25.196948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:57:25.207590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:57:25.218666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:57:25.228782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:57:25.229325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:57:25.229456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:25.231846 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:57:25.232001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:57:25.242048 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 01:57:25.257089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:57:25.257245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:57:25.259595 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:57:25.261817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:57:25.265763 augenrules[1533]: No rules
Nov 1 01:57:25.268988 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 01:57:25.277242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:25.277427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:57:25.290724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:57:25.297733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:57:25.303581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:57:25.303994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:57:25.304094 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 01:57:25.304156 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:25.305240 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 01:57:25.307361 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 01:57:25.309869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:57:25.310029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:57:25.310860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:57:25.310997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:57:25.321527 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:57:25.321745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:57:25.328845 systemd[1]: Finished ensure-sysext.service.
Nov 1 01:57:25.333677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:25.334036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:57:25.339683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:57:25.342585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 01:57:25.351632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:57:25.352140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:57:25.358621 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 01:57:25.364439 systemd-resolved[1509]: Positive Trust Anchors:
Nov 1 01:57:25.364459 systemd-resolved[1509]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 01:57:25.365011 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 01:57:25.366485 systemd-resolved[1509]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 01:57:25.369580 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 01:57:25.369619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:57:25.370127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:57:25.370300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:57:25.373215 systemd-resolved[1509]: Using system hostname 'srv-gnbw4.gb1.brightbox.com'.
Nov 1 01:57:25.374227 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 01:57:25.375658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 01:57:25.376302 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 01:57:25.378038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:57:25.378422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:57:25.387652 systemd[1]: Reached target network.target - Network.
Nov 1 01:57:25.388070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:57:25.388531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 01:57:25.388587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 01:57:25.404314 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 01:57:25.442661 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 01:57:25.443271 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 01:57:25.443778 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 01:57:25.444153 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 01:57:25.444574 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 01:57:25.444950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 01:57:25.444969 systemd[1]: Reached target paths.target - Path Units.
Nov 1 01:57:25.445247 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 01:57:25.445738 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 01:57:25.446160 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 01:57:25.446704 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 01:57:25.447639 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 01:57:25.449703 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 01:57:25.451923 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 01:57:25.456483 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 01:57:25.456913 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 01:57:25.457280 systemd[1]: Reached target basic.target - Basic System.
Nov 1 01:57:25.457826 systemd[1]: System is tainted: cgroupsv1
Nov 1 01:57:25.457857 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 01:57:25.457875 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 01:57:25.460524 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 01:57:25.462768 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 01:57:25.471580 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 01:57:25.474859 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 01:57:25.477572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 01:57:25.480508 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 01:57:25.484870 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 01:57:25.495678 jq[1583]: false
Nov 1 01:57:25.501196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 01:57:25.502358 dbus-daemon[1581]: [system] SELinux support is enabled
Nov 1 01:57:25.505205 dbus-daemon[1581]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1267 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 1 01:57:25.510589 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 01:57:25.522579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 01:57:25.527674 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 01:57:25.529236 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 01:57:25.533413 extend-filesystems[1584]: Found loop4
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found loop5
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found loop6
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found loop7
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda1
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda2
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda3
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found usr
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda4
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda6
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda7
Nov 1 01:57:25.534624 extend-filesystems[1584]: Found vda9
Nov 1 01:57:25.534624 extend-filesystems[1584]: Checking size of /dev/vda9
Nov 1 01:57:25.537092 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 01:57:25.551521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 01:57:25.561857 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 01:57:25.568925 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 01:57:25.569147 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 01:57:25.570637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 01:57:25.570837 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 01:57:25.591100 jq[1604]: true
Nov 1 01:57:25.593542 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 01:57:25.596708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 01:57:25.597250 update_engine[1597]: I20251101 01:57:25.596635 1597 main.cc:92] Flatcar Update Engine starting
Nov 1 01:57:25.605470 extend-filesystems[1584]: Resized partition /dev/vda9
Nov 1 01:57:25.606741 update_engine[1597]: I20251101 01:57:25.605268 1597 update_check_scheduler.cc:74] Next update check in 5m57s
Nov 1 01:57:25.605005 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 01:57:25.611493 extend-filesystems[1621]: resize2fs 1.47.1 (20-May-2024)
Nov 1 01:57:25.618475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1259)
Nov 1 01:57:25.625459 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Nov 1 01:57:25.622873 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 01:57:25.627942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 01:57:25.627981 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 01:57:25.629831 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 01:57:25.634619 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 1 01:57:25.635927 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 01:57:25.635951 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 01:57:25.639129 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 01:57:25.650847 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 01:57:25.659512 tar[1609]: linux-amd64/LICENSE
Nov 1 01:57:25.659512 tar[1609]: linux-amd64/helm
Nov 1 01:57:26.582333 jq[1619]: true
Nov 1 01:57:26.582571 systemd-resolved[1509]: Clock change detected. Flushing caches.
Nov 1 01:57:26.582741 systemd-timesyncd[1566]: Contacted time server 85.199.214.100:123 (0.flatcar.pool.ntp.org).
Nov 1 01:57:26.582824 systemd-timesyncd[1566]: Initial clock synchronization to Sat 2025-11-01 01:57:26.582019 UTC.
Nov 1 01:57:26.597431 systemd-logind[1594]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 1 01:57:26.597454 systemd-logind[1594]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 01:57:26.604176 systemd-logind[1594]: New seat seat0.
Nov 1 01:57:26.619293 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 01:57:26.627971 systemd-networkd[1267]: eth0: Gained IPv6LL
Nov 1 01:57:26.646479 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 01:57:26.652001 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 01:57:26.667669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:57:26.680425 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 01:57:26.797175 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 1 01:57:26.800670 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 01:57:26.806339 bash[1655]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 01:57:26.806423 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 01:57:26.814165 extend-filesystems[1621]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 01:57:26.814165 extend-filesystems[1621]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 1 01:57:26.814165 extend-filesystems[1621]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 1 01:57:26.830667 systemd[1]: Starting sshkeys.service...
Nov 1 01:57:26.834354 extend-filesystems[1584]: Resized filesystem in /dev/vda9
Nov 1 01:57:26.841570 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 01:57:26.841853 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 01:57:26.855828 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 1 01:57:26.855997 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 1 01:57:26.857060 dbus-daemon[1581]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1627 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 1 01:57:26.865771 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 1 01:57:26.881358 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 1 01:57:26.885459 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 1 01:57:26.896401 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 01:57:26.924419 polkitd[1671]: Started polkitd version 121
Nov 1 01:57:26.949158 sshd_keygen[1618]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 01:57:26.961460 polkitd[1671]: Loading rules from directory /etc/polkit-1/rules.d
Nov 1 01:57:26.961544 polkitd[1671]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 1 01:57:26.976148 polkitd[1671]: Finished loading, compiling and executing 2 rules
Nov 1 01:57:26.981548 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 1 01:57:26.982883 systemd[1]: Started polkit.service - Authorization Manager.
Nov 1 01:57:26.984858 polkitd[1671]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 1 01:57:27.026096 systemd-hostnamed[1627]: Hostname set to (static)
Nov 1 01:57:27.031843 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 01:57:27.042558 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 01:57:27.076103 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 01:57:27.078269 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 01:57:27.090644 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 01:57:27.112716 containerd[1626]: time="2025-11-01T01:57:27.112191694Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 01:57:27.123476 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 01:57:27.137158 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 01:57:27.146583 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 01:57:27.147832 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 01:57:27.173561 containerd[1626]: time="2025-11-01T01:57:27.173508147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.177106 containerd[1626]: time="2025-11-01T01:57:27.176855898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:57:27.177106 containerd[1626]: time="2025-11-01T01:57:27.176891958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 01:57:27.177106 containerd[1626]: time="2025-11-01T01:57:27.176908354Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 01:57:27.177237 containerd[1626]: time="2025-11-01T01:57:27.177200175Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 01:57:27.177237 containerd[1626]: time="2025-11-01T01:57:27.177223936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177295883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177317563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177588101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177663290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177679358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177689650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178147 containerd[1626]: time="2025-11-01T01:57:27.177765012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178533 containerd[1626]: time="2025-11-01T01:57:27.178511876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178708 containerd[1626]: time="2025-11-01T01:57:27.178688544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:57:27.178741 containerd[1626]: time="2025-11-01T01:57:27.178717739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 01:57:27.178816 containerd[1626]: time="2025-11-01T01:57:27.178802713Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 01:57:27.178861 containerd[1626]: time="2025-11-01T01:57:27.178849646Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 01:57:27.181514 containerd[1626]: time="2025-11-01T01:57:27.181490209Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 01:57:27.181572 containerd[1626]: time="2025-11-01T01:57:27.181554482Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 01:57:27.181614 containerd[1626]: time="2025-11-01T01:57:27.181572711Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 01:57:27.181614 containerd[1626]: time="2025-11-01T01:57:27.181588219Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 01:57:27.181664 containerd[1626]: time="2025-11-01T01:57:27.181632437Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 01:57:27.182466 containerd[1626]: time="2025-11-01T01:57:27.182243653Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 01:57:27.183843 containerd[1626]: time="2025-11-01T01:57:27.183820978Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 01:57:27.184742 containerd[1626]: time="2025-11-01T01:57:27.184724413Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 01:57:27.184776 containerd[1626]: time="2025-11-01T01:57:27.184748293Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 01:57:27.184776 containerd[1626]: time="2025-11-01T01:57:27.184769358Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 01:57:27.184833 containerd[1626]: time="2025-11-01T01:57:27.184786680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184833 containerd[1626]: time="2025-11-01T01:57:27.184802599Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184878 containerd[1626]: time="2025-11-01T01:57:27.184822893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184878 containerd[1626]: time="2025-11-01T01:57:27.184853114Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184878 containerd[1626]: time="2025-11-01T01:57:27.184868876Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184946 containerd[1626]: time="2025-11-01T01:57:27.184882643Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184946 containerd[1626]: time="2025-11-01T01:57:27.184895209Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184946 containerd[1626]: time="2025-11-01T01:57:27.184909086Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 01:57:27.184946 containerd[1626]: time="2025-11-01T01:57:27.184931262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185047 containerd[1626]: time="2025-11-01T01:57:27.184946275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185047 containerd[1626]: time="2025-11-01T01:57:27.184959637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185047 containerd[1626]: time="2025-11-01T01:57:27.184980842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185047 containerd[1626]: time="2025-11-01T01:57:27.184995639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185047 containerd[1626]: time="2025-11-01T01:57:27.185019346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185047 containerd[1626]: time="2025-11-01T01:57:27.185031870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185047893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185093034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185108502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185120999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185132360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185160510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185208 containerd[1626]: time="2025-11-01T01:57:27.185183958Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185209497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185221815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185232177Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185281460Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185301611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185313247Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185325195Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185336877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185350381Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 01:57:27.185366 containerd[1626]: time="2025-11-01T01:57:27.185365092Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 01:57:27.185595 containerd[1626]: time="2025-11-01T01:57:27.185378429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 01:57:27.186508 containerd[1626]: time="2025-11-01T01:57:27.185671653Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 01:57:27.186508 containerd[1626]: time="2025-11-01T01:57:27.185734504Z" level=info msg="Connect containerd service"
Nov 1 01:57:27.186508 containerd[1626]: time="2025-11-01T01:57:27.185783125Z" level=info msg="using legacy CRI server"
Nov 1 01:57:27.186508 containerd[1626]: time="2025-11-01T01:57:27.185791374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 01:57:27.186508 containerd[1626]: time="2025-11-01T01:57:27.185907431Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 01:57:27.186508 containerd[1626]: time="2025-11-01T01:57:27.186496013Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 01:57:27.189269 containerd[1626]: time="2025-11-01T01:57:27.188691297Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 01:57:27.189474 containerd[1626]: time="2025-11-01T01:57:27.189457627Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 01:57:27.189631 containerd[1626]: time="2025-11-01T01:57:27.189596582Z" level=info msg="Start subscribing containerd event"
Nov 1 01:57:27.189659 containerd[1626]: time="2025-11-01T01:57:27.189649794Z" level=info msg="Start recovering state"
Nov 1 01:57:27.189744 containerd[1626]: time="2025-11-01T01:57:27.189730950Z" level=info msg="Start event monitor"
Nov 1 01:57:27.189776 containerd[1626]: time="2025-11-01T01:57:27.189756523Z" level=info msg="Start snapshots syncer"
Nov 1 01:57:27.189776 containerd[1626]: time="2025-11-01T01:57:27.189766777Z" level=info msg="Start cni network conf syncer for default"
Nov 1 01:57:27.189826 containerd[1626]: time="2025-11-01T01:57:27.189776750Z" level=info msg="Start streaming server"
Nov 1 01:57:27.190472 containerd[1626]: time="2025-11-01T01:57:27.189853154Z" level=info msg="containerd successfully booted in 0.080114s"
Nov 1 01:57:27.189968 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 01:57:27.569833 tar[1609]: linux-amd64/README.md
Nov 1 01:57:27.588634 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 01:57:27.906022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:57:27.927688 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:57:28.142330 systemd-networkd[1267]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:16a6:24:19ff:fef4:5a9a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:16a6:24:19ff:fef4:5a9a/64 assigned by NDisc.
Nov 1 01:57:28.142352 systemd-networkd[1267]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 1 01:57:28.535684 kubelet[1724]: E1101 01:57:28.535571 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:57:28.539571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:57:28.541180 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:57:32.224467 login[1705]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 01:57:32.224620 login[1706]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 01:57:32.248681 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 01:57:32.255565 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 01:57:32.259705 systemd-logind[1594]: New session 1 of user core.
Nov 1 01:57:32.265762 systemd-logind[1594]: New session 2 of user core.
Nov 1 01:57:32.276390 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 01:57:32.283630 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 01:57:32.290910 (systemd)[1744]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:57:32.397238 systemd[1744]: Queued start job for default target default.target.
Nov 1 01:57:32.398365 systemd[1744]: Created slice app.slice - User Application Slice.
Nov 1 01:57:32.398399 systemd[1744]: Reached target paths.target - Paths.
Nov 1 01:57:32.398414 systemd[1744]: Reached target timers.target - Timers.
Nov 1 01:57:32.406266 systemd[1744]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 01:57:32.423958 systemd[1744]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 01:57:32.424092 systemd[1744]: Reached target sockets.target - Sockets.
Nov 1 01:57:32.424132 systemd[1744]: Reached target basic.target - Basic System.
Nov 1 01:57:32.426060 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 01:57:32.427270 systemd[1744]: Reached target default.target - Main User Target.
Nov 1 01:57:32.427319 systemd[1744]: Startup finished in 129ms.
Nov 1 01:57:32.438539 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 01:57:32.439393 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 01:57:33.505765 coreos-metadata[1580]: Nov 01 01:57:33.505 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 01:57:33.533517 coreos-metadata[1580]: Nov 01 01:57:33.533 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Nov 1 01:57:33.540845 coreos-metadata[1580]: Nov 01 01:57:33.540 INFO Fetch failed with 404: resource not found
Nov 1 01:57:33.541040 coreos-metadata[1580]: Nov 01 01:57:33.541 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 1 01:57:33.541692 coreos-metadata[1580]: Nov 01 01:57:33.541 INFO Fetch successful
Nov 1 01:57:33.541842 coreos-metadata[1580]: Nov 01 01:57:33.541 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Nov 1 01:57:33.558372 coreos-metadata[1580]: Nov 01 01:57:33.558 INFO Fetch successful
Nov 1 01:57:33.558706 coreos-metadata[1580]: Nov 01 01:57:33.558 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Nov 1 01:57:33.571402 coreos-metadata[1580]: Nov 01 01:57:33.571 INFO Fetch successful
Nov 1 01:57:33.571699 coreos-metadata[1580]: Nov 01 01:57:33.571 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Nov 1 01:57:33.585514 coreos-metadata[1580]: Nov 01 01:57:33.585 INFO Fetch successful
Nov 1 01:57:33.585845 coreos-metadata[1580]: Nov 01 01:57:33.585 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Nov 1 01:57:33.603485 coreos-metadata[1580]: Nov 01 01:57:33.603 INFO Fetch successful
Nov 1 01:57:33.659222 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 1 01:57:33.661344 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 1 01:57:34.043858 coreos-metadata[1675]: Nov 01 01:57:34.043 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 01:57:34.061485 coreos-metadata[1675]: Nov 01 01:57:34.061 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Nov 1 01:57:34.087737 coreos-metadata[1675]: Nov 01 01:57:34.087 INFO Fetch successful
Nov 1 01:57:34.087923 coreos-metadata[1675]: Nov 01 01:57:34.087 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 1 01:57:34.114256 coreos-metadata[1675]: Nov 01 01:57:34.114 INFO Fetch successful
Nov 1 01:57:34.116478 unknown[1675]: wrote ssh authorized keys file for user: core
Nov 1 01:57:34.138369 update-ssh-keys[1791]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 01:57:34.139071 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 1 01:57:34.145873 systemd[1]: Finished sshkeys.service.
Nov 1 01:57:34.153507 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 01:57:34.153835 systemd[1]: Startup finished in 15.328s (kernel) + 11.426s (userspace) = 26.755s.
Nov 1 01:57:36.470463 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 01:57:36.481595 systemd[1]: Started sshd@0-10.244.90.154:22-147.75.109.163:54792.service - OpenSSH per-connection server daemon (147.75.109.163:54792).
Nov 1 01:57:37.382750 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 54792 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:57:37.386388 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:57:37.398400 systemd-logind[1594]: New session 3 of user core.
Nov 1 01:57:37.409328 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 01:57:38.150752 systemd[1]: Started sshd@1-10.244.90.154:22-147.75.109.163:54804.service - OpenSSH per-connection server daemon (147.75.109.163:54804).
Nov 1 01:57:38.623719 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 01:57:38.639497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:57:38.832312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:57:38.841554 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:57:38.911949 kubelet[1816]: E1101 01:57:38.911765 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:57:38.919319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:57:38.919832 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:57:39.059814 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 54804 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:57:39.063914 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:57:39.076590 systemd-logind[1594]: New session 4 of user core.
Nov 1 01:57:39.082733 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 01:57:39.690611 sshd[1802]: pam_unix(sshd:session): session closed for user core
Nov 1 01:57:39.700336 systemd[1]: sshd@1-10.244.90.154:22-147.75.109.163:54804.service: Deactivated successfully.
Nov 1 01:57:39.704225 systemd-logind[1594]: Session 4 logged out. Waiting for processes to exit.
Nov 1 01:57:39.704611 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 01:57:39.706186 systemd-logind[1594]: Removed session 4.
Nov 1 01:57:39.843666 systemd[1]: Started sshd@2-10.244.90.154:22-147.75.109.163:54810.service - OpenSSH per-connection server daemon (147.75.109.163:54810).
Nov 1 01:57:40.748199 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 54810 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:57:40.752004 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:57:40.760902 systemd-logind[1594]: New session 5 of user core.
Nov 1 01:57:40.773448 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 01:57:41.370656 sshd[1831]: pam_unix(sshd:session): session closed for user core
Nov 1 01:57:41.379031 systemd[1]: sshd@2-10.244.90.154:22-147.75.109.163:54810.service: Deactivated successfully.
Nov 1 01:57:41.385957 systemd-logind[1594]: Session 5 logged out. Waiting for processes to exit.
Nov 1 01:57:41.386511 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 01:57:41.388214 systemd-logind[1594]: Removed session 5.
Nov 1 01:57:41.529966 systemd[1]: Started sshd@3-10.244.90.154:22-147.75.109.163:33930.service - OpenSSH per-connection server daemon (147.75.109.163:33930).
Nov 1 01:57:42.431632 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 33930 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:57:42.435580 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:57:42.446579 systemd-logind[1594]: New session 6 of user core.
Nov 1 01:57:42.454059 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 01:57:43.061752 sshd[1839]: pam_unix(sshd:session): session closed for user core
Nov 1 01:57:43.068448 systemd[1]: sshd@3-10.244.90.154:22-147.75.109.163:33930.service: Deactivated successfully.
Nov 1 01:57:43.074677 systemd-logind[1594]: Session 6 logged out. Waiting for processes to exit.
Nov 1 01:57:43.075508 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 01:57:43.076795 systemd-logind[1594]: Removed session 6.
Nov 1 01:57:43.223628 systemd[1]: Started sshd@4-10.244.90.154:22-147.75.109.163:33938.service - OpenSSH per-connection server daemon (147.75.109.163:33938).
Nov 1 01:57:44.132581 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 33938 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:57:44.136326 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:57:44.145692 systemd-logind[1594]: New session 7 of user core.
Nov 1 01:57:44.159765 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 01:57:44.639579 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 01:57:44.639874 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:57:44.661029 sudo[1851]: pam_unix(sudo:session): session closed for user root
Nov 1 01:57:44.809061 sshd[1847]: pam_unix(sshd:session): session closed for user core
Nov 1 01:57:44.816494 systemd-logind[1594]: Session 7 logged out. Waiting for processes to exit.
Nov 1 01:57:44.818550 systemd[1]: sshd@4-10.244.90.154:22-147.75.109.163:33938.service: Deactivated successfully. Nov 1 01:57:44.824080 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 01:57:44.826454 systemd-logind[1594]: Removed session 7. Nov 1 01:57:44.970760 systemd[1]: Started sshd@5-10.244.90.154:22-147.75.109.163:33950.service - OpenSSH per-connection server daemon (147.75.109.163:33950). Nov 1 01:57:45.877779 sshd[1856]: Accepted publickey for core from 147.75.109.163 port 33950 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 01:57:45.881433 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:57:45.893571 systemd-logind[1594]: New session 8 of user core. Nov 1 01:57:45.908546 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 01:57:46.369863 sudo[1861]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 01:57:46.370691 sudo[1861]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:57:46.375939 sudo[1861]: pam_unix(sudo:session): session closed for user root Nov 1 01:57:46.384672 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 01:57:46.385013 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:57:46.400448 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 01:57:46.402468 auditctl[1864]: No rules Nov 1 01:57:46.402854 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 01:57:46.403087 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 01:57:46.409715 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 01:57:46.434748 augenrules[1883]: No rules Nov 1 01:57:46.436027 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Nov 1 01:57:46.438448 sudo[1860]: pam_unix(sudo:session): session closed for user root Nov 1 01:57:46.584552 sshd[1856]: pam_unix(sshd:session): session closed for user core Nov 1 01:57:46.593991 systemd[1]: sshd@5-10.244.90.154:22-147.75.109.163:33950.service: Deactivated successfully. Nov 1 01:57:46.596946 systemd-logind[1594]: Session 8 logged out. Waiting for processes to exit. Nov 1 01:57:46.597865 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 01:57:46.599011 systemd-logind[1594]: Removed session 8. Nov 1 01:57:46.746612 systemd[1]: Started sshd@6-10.244.90.154:22-147.75.109.163:33958.service - OpenSSH per-connection server daemon (147.75.109.163:33958). Nov 1 01:57:47.648394 sshd[1892]: Accepted publickey for core from 147.75.109.163 port 33958 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 01:57:47.652951 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:57:47.662074 systemd-logind[1594]: New session 9 of user core. Nov 1 01:57:47.673807 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 01:57:48.136127 sudo[1896]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 01:57:48.136538 sudo[1896]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 01:57:48.582701 (dockerd)[1912]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 01:57:48.582782 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 01:57:48.963298 dockerd[1912]: time="2025-11-01T01:57:48.963215894Z" level=info msg="Starting up" Nov 1 01:57:48.965460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 01:57:48.972487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 01:57:49.121300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:57:49.125847 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:57:49.170118 systemd[1]: var-lib-docker-metacopy\x2dcheck3530899533-merged.mount: Deactivated successfully. Nov 1 01:57:49.187185 kubelet[1946]: E1101 01:57:49.186834 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:57:49.191332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:57:49.191511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:57:49.197985 dockerd[1912]: time="2025-11-01T01:57:49.197954894Z" level=info msg="Loading containers: start." Nov 1 01:57:49.322665 kernel: Initializing XFRM netlink socket Nov 1 01:57:49.416326 systemd-networkd[1267]: docker0: Link UP Nov 1 01:57:49.438470 dockerd[1912]: time="2025-11-01T01:57:49.438437214Z" level=info msg="Loading containers: done." 
Nov 1 01:57:49.457691 dockerd[1912]: time="2025-11-01T01:57:49.457209789Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 01:57:49.457691 dockerd[1912]: time="2025-11-01T01:57:49.457346387Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 01:57:49.457691 dockerd[1912]: time="2025-11-01T01:57:49.457456523Z" level=info msg="Daemon has completed initialization" Nov 1 01:57:49.480984 dockerd[1912]: time="2025-11-01T01:57:49.480915371Z" level=info msg="API listen on /run/docker.sock" Nov 1 01:57:49.481306 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 01:57:50.645675 containerd[1626]: time="2025-11-01T01:57:50.645185082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 01:57:51.674979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980612564.mount: Deactivated successfully. 
Nov 1 01:57:53.213485 containerd[1626]: time="2025-11-01T01:57:53.212228677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:53.213485 containerd[1626]: time="2025-11-01T01:57:53.213431938Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Nov 1 01:57:53.214223 containerd[1626]: time="2025-11-01T01:57:53.214195597Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:53.220573 containerd[1626]: time="2025-11-01T01:57:53.220549008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:53.222464 containerd[1626]: time="2025-11-01T01:57:53.222411621Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.577142759s" Nov 1 01:57:53.222539 containerd[1626]: time="2025-11-01T01:57:53.222477687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 01:57:53.223100 containerd[1626]: time="2025-11-01T01:57:53.223073332Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 01:57:55.067450 containerd[1626]: time="2025-11-01T01:57:55.067381074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:55.068740 containerd[1626]: time="2025-11-01T01:57:55.068495048Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Nov 1 01:57:55.069625 containerd[1626]: time="2025-11-01T01:57:55.069204772Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:55.071897 containerd[1626]: time="2025-11-01T01:57:55.071866123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:55.073568 containerd[1626]: time="2025-11-01T01:57:55.073538646Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.850431495s" Nov 1 01:57:55.073686 containerd[1626]: time="2025-11-01T01:57:55.073669849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 01:57:55.074345 containerd[1626]: time="2025-11-01T01:57:55.074191754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 01:57:56.489215 containerd[1626]: time="2025-11-01T01:57:56.489022382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:56.490379 containerd[1626]: time="2025-11-01T01:57:56.490173161Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Nov 1 01:57:56.496214 containerd[1626]: time="2025-11-01T01:57:56.494625714Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:56.497951 containerd[1626]: time="2025-11-01T01:57:56.497882720Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.423439368s" Nov 1 01:57:56.498252 containerd[1626]: time="2025-11-01T01:57:56.498131267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 01:57:56.499824 containerd[1626]: time="2025-11-01T01:57:56.499330776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:56.500119 containerd[1626]: time="2025-11-01T01:57:56.500074935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 01:57:58.160401 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 01:57:58.359526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452072084.mount: Deactivated successfully. 
Nov 1 01:57:58.857585 containerd[1626]: time="2025-11-01T01:57:58.857528458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:58.858808 containerd[1626]: time="2025-11-01T01:57:58.858760983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Nov 1 01:57:58.859331 containerd[1626]: time="2025-11-01T01:57:58.859297945Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:58.863573 containerd[1626]: time="2025-11-01T01:57:58.863532694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:57:58.864871 containerd[1626]: time="2025-11-01T01:57:58.864838314Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.364703926s" Nov 1 01:57:58.864927 containerd[1626]: time="2025-11-01T01:57:58.864876597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 01:57:58.865489 containerd[1626]: time="2025-11-01T01:57:58.865460895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 01:57:59.373754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 01:57:59.387533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 01:57:59.567300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:57:59.573908 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:57:59.584627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348819379.mount: Deactivated successfully. Nov 1 01:57:59.644195 kubelet[2170]: E1101 01:57:59.644004 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:57:59.648344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:57:59.648558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:58:00.653791 containerd[1626]: time="2025-11-01T01:58:00.653699815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:00.655507 containerd[1626]: time="2025-11-01T01:58:00.654856443Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Nov 1 01:58:00.658179 containerd[1626]: time="2025-11-01T01:58:00.658012555Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:00.662839 containerd[1626]: time="2025-11-01T01:58:00.662797695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:00.666214 containerd[1626]: time="2025-11-01T01:58:00.665783893Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.800256522s" Nov 1 01:58:00.666214 containerd[1626]: time="2025-11-01T01:58:00.665865902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 01:58:00.666854 containerd[1626]: time="2025-11-01T01:58:00.666816698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 01:58:01.499671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658564502.mount: Deactivated successfully. Nov 1 01:58:01.506439 containerd[1626]: time="2025-11-01T01:58:01.506393060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:01.507224 containerd[1626]: time="2025-11-01T01:58:01.507181608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 1 01:58:01.508162 containerd[1626]: time="2025-11-01T01:58:01.507769584Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:01.511169 containerd[1626]: time="2025-11-01T01:58:01.510354433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:01.511874 containerd[1626]: time="2025-11-01T01:58:01.511850526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 844.977897ms" Nov 1 01:58:01.511995 containerd[1626]: time="2025-11-01T01:58:01.511979987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 01:58:01.512648 containerd[1626]: time="2025-11-01T01:58:01.512630483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 01:58:02.393115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689697137.mount: Deactivated successfully. Nov 1 01:58:09.873158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 01:58:09.881037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:58:10.086517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:58:10.101866 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:58:10.180686 kubelet[2298]: E1101 01:58:10.179354 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:58:10.184477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:58:10.184670 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 01:58:11.029316 containerd[1626]: time="2025-11-01T01:58:11.029205317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:11.031056 containerd[1626]: time="2025-11-01T01:58:11.030706824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Nov 1 01:58:11.031509 containerd[1626]: time="2025-11-01T01:58:11.031483497Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:11.034720 containerd[1626]: time="2025-11-01T01:58:11.034681445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:11.036941 containerd[1626]: time="2025-11-01T01:58:11.036915554Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.524182243s" Nov 1 01:58:11.037048 containerd[1626]: time="2025-11-01T01:58:11.037034901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 01:58:11.412782 update_engine[1597]: I20251101 01:58:11.411505 1597 update_attempter.cc:509] Updating boot flags... 
Nov 1 01:58:11.470400 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2322) Nov 1 01:58:11.544657 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2325) Nov 1 01:58:13.841736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:58:13.860468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:58:13.887459 systemd[1]: Reloading requested from client PID 2351 ('systemctl') (unit session-9.scope)... Nov 1 01:58:13.887481 systemd[1]: Reloading... Nov 1 01:58:14.023488 zram_generator::config[2390]: No configuration found. Nov 1 01:58:14.196031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:58:14.275798 systemd[1]: Reloading finished in 387 ms. Nov 1 01:58:14.331210 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 01:58:14.331554 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 01:58:14.332135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:58:14.337439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:58:14.471569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:58:14.476408 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 01:58:14.543207 kubelet[2469]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:58:14.543734 kubelet[2469]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 1 01:58:14.543785 kubelet[2469]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:58:14.544034 kubelet[2469]: I1101 01:58:14.543993 2469 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:58:14.850721 kubelet[2469]: I1101 01:58:14.850511 2469 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:58:14.850721 kubelet[2469]: I1101 01:58:14.850566 2469 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:58:14.851690 kubelet[2469]: I1101 01:58:14.851609 2469 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:58:14.902787 kubelet[2469]: E1101 01:58:14.902740 2469 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.90.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:14.903745 kubelet[2469]: I1101 01:58:14.903395 2469 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:58:14.914816 kubelet[2469]: E1101 01:58:14.913840 2469 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:58:14.914816 kubelet[2469]: I1101 01:58:14.913873 2469 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Nov 1 01:58:14.918497 kubelet[2469]: I1101 01:58:14.918478 2469 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 01:58:14.920647 kubelet[2469]: I1101 01:58:14.920603 2469 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:58:14.920932 kubelet[2469]: I1101 01:58:14.920730 2469 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gnbw4.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Top
ologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 01:58:14.922838 kubelet[2469]: I1101 01:58:14.922820 2469 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:58:14.922921 kubelet[2469]: I1101 01:58:14.922913 2469 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:58:14.924231 kubelet[2469]: I1101 01:58:14.924216 2469 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:58:14.928990 kubelet[2469]: I1101 01:58:14.928974 2469 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:58:14.929097 kubelet[2469]: I1101 01:58:14.929088 2469 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:58:14.929182 kubelet[2469]: I1101 01:58:14.929175 2469 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:58:14.929254 kubelet[2469]: I1101 01:58:14.929246 2469 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:58:14.934964 kubelet[2469]: W1101 01:58:14.934876 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.90.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gnbw4.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:14.935068 kubelet[2469]: E1101 01:58:14.935028 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.90.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gnbw4.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:14.936812 kubelet[2469]: W1101 01:58:14.936725 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.90.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:14.936876 kubelet[2469]: E1101 01:58:14.936844 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.90.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:14.940386 kubelet[2469]: I1101 01:58:14.940116 2469 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 01:58:14.946176 kubelet[2469]: I1101 01:58:14.945955 2469 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:58:14.947372 kubelet[2469]: W1101 01:58:14.947165 2469 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 01:58:14.950561 kubelet[2469]: I1101 01:58:14.950370 2469 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:58:14.950561 kubelet[2469]: I1101 01:58:14.950408 2469 server.go:1287] "Started kubelet" Nov 1 01:58:14.958726 kubelet[2469]: I1101 01:58:14.958118 2469 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:58:14.958726 kubelet[2469]: I1101 01:58:14.958486 2469 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:58:14.961124 kubelet[2469]: I1101 01:58:14.958862 2469 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:58:14.961124 kubelet[2469]: I1101 01:58:14.959370 2469 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:58:14.961124 kubelet[2469]: I1101 01:58:14.960417 2469 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:58:14.969608 kubelet[2469]: I1101 01:58:14.969577 2469 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:58:14.969749 kubelet[2469]: I1101 01:58:14.969737 2469 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:58:14.970169 kubelet[2469]: E1101 01:58:14.970018 2469 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" Nov 1 01:58:14.973592 kubelet[2469]: E1101 01:58:14.973429 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gnbw4.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.154:6443: connect: connection refused" interval="200ms" Nov 1 01:58:14.973954 kubelet[2469]: I1101 01:58:14.973940 2469 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:58:14.974167 kubelet[2469]: I1101 
01:58:14.974064 2469 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:58:14.977782 kubelet[2469]: E1101 01:58:14.973556 2469 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.90.154:6443/api/v1/namespaces/default/events\": dial tcp 10.244.90.154:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gnbw4.gb1.brightbox.com.1873bf5f3a266192 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gnbw4.gb1.brightbox.com,UID:srv-gnbw4.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gnbw4.gb1.brightbox.com,},FirstTimestamp:2025-11-01 01:58:14.950388114 +0000 UTC m=+0.467647752,LastTimestamp:2025-11-01 01:58:14.950388114 +0000 UTC m=+0.467647752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gnbw4.gb1.brightbox.com,}" Nov 1 01:58:14.984635 kubelet[2469]: I1101 01:58:14.984519 2469 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 01:58:14.986171 kubelet[2469]: W1101 01:58:14.985438 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.90.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:14.986171 kubelet[2469]: E1101 01:58:14.985489 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.90.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:14.986527 kubelet[2469]: I1101 01:58:14.986505 2469 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:58:14.986610 kubelet[2469]: I1101 01:58:14.986603 2469 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:58:14.986696 kubelet[2469]: I1101 01:58:14.986686 2469 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 01:58:14.986740 kubelet[2469]: I1101 01:58:14.986733 2469 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:58:14.986831 kubelet[2469]: E1101 01:58:14.986818 2469 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:58:14.990847 kubelet[2469]: W1101 01:58:14.990815 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.90.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:14.990961 kubelet[2469]: E1101 01:58:14.990946 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.90.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:14.991216 kubelet[2469]: I1101 01:58:14.991203 2469 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:58:14.991294 kubelet[2469]: I1101 01:58:14.991286 2469 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:58:14.991435 kubelet[2469]: I1101 01:58:14.991420 2469 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:58:15.002854 kubelet[2469]: E1101 01:58:15.002796 2469 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:58:15.028883 kubelet[2469]: I1101 01:58:15.028831 2469 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:58:15.028883 kubelet[2469]: I1101 01:58:15.028863 2469 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:58:15.029396 kubelet[2469]: I1101 01:58:15.028904 2469 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:58:15.030656 kubelet[2469]: I1101 01:58:15.030611 2469 policy_none.go:49] "None policy: Start" Nov 1 01:58:15.030794 kubelet[2469]: I1101 01:58:15.030663 2469 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:58:15.030794 kubelet[2469]: I1101 01:58:15.030691 2469 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:58:15.037563 kubelet[2469]: I1101 01:58:15.037504 2469 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:58:15.037842 kubelet[2469]: I1101 01:58:15.037804 2469 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:58:15.037929 kubelet[2469]: I1101 01:58:15.037842 2469 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:58:15.040525 kubelet[2469]: I1101 01:58:15.040476 2469 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:58:15.045471 kubelet[2469]: E1101 01:58:15.045443 2469 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 01:58:15.046338 kubelet[2469]: E1101 01:58:15.046252 2469 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gnbw4.gb1.brightbox.com\" not found" Nov 1 01:58:15.109572 kubelet[2469]: E1101 01:58:15.106605 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.115029 kubelet[2469]: E1101 01:58:15.114998 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.121266 kubelet[2469]: E1101 01:58:15.121239 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.140757 kubelet[2469]: I1101 01:58:15.140737 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.141202 kubelet[2469]: E1101 01:58:15.141179 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.90.154:6443/api/v1/nodes\": dial tcp 10.244.90.154:6443: connect: connection refused" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.174696 kubelet[2469]: E1101 01:58:15.174589 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gnbw4.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.154:6443: connect: connection refused" interval="400ms" Nov 1 01:58:15.275383 kubelet[2469]: I1101 01:58:15.275264 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-ca-certs\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.275383 kubelet[2469]: I1101 01:58:15.275394 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-k8s-certs\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.275804 kubelet[2469]: I1101 01:58:15.275444 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/972dc68a19738e9548edc6e91f62057b-ca-certs\") pod \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" (UID: \"972dc68a19738e9548edc6e91f62057b\") " pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.275804 kubelet[2469]: I1101 01:58:15.275492 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/972dc68a19738e9548edc6e91f62057b-k8s-certs\") pod \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" (UID: \"972dc68a19738e9548edc6e91f62057b\") " pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.275804 kubelet[2469]: I1101 01:58:15.275539 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/972dc68a19738e9548edc6e91f62057b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" (UID: \"972dc68a19738e9548edc6e91f62057b\") " pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.275804 kubelet[2469]: 
I1101 01:58:15.275584 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8782202c501166a4060e598dd8655bac-kubeconfig\") pod \"kube-scheduler-srv-gnbw4.gb1.brightbox.com\" (UID: \"8782202c501166a4060e598dd8655bac\") " pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.275804 kubelet[2469]: I1101 01:58:15.275633 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-flexvolume-dir\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.276283 kubelet[2469]: I1101 01:58:15.275674 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-kubeconfig\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.276283 kubelet[2469]: I1101 01:58:15.275764 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.346210 kubelet[2469]: I1101 01:58:15.345661 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.346643 kubelet[2469]: E1101 01:58:15.346592 2469 kubelet_node_status.go:107] "Unable to register node 
with API server" err="Post \"https://10.244.90.154:6443/api/v1/nodes\": dial tcp 10.244.90.154:6443: connect: connection refused" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.410369 containerd[1626]: time="2025-11-01T01:58:15.410093084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gnbw4.gb1.brightbox.com,Uid:972dc68a19738e9548edc6e91f62057b,Namespace:kube-system,Attempt:0,}" Nov 1 01:58:15.423691 containerd[1626]: time="2025-11-01T01:58:15.423313436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gnbw4.gb1.brightbox.com,Uid:8782202c501166a4060e598dd8655bac,Namespace:kube-system,Attempt:0,}" Nov 1 01:58:15.423691 containerd[1626]: time="2025-11-01T01:58:15.423448783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gnbw4.gb1.brightbox.com,Uid:3ecf6633b58da7ed8d6a22fba171819a,Namespace:kube-system,Attempt:0,}" Nov 1 01:58:15.575692 kubelet[2469]: E1101 01:58:15.575629 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gnbw4.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.154:6443: connect: connection refused" interval="800ms" Nov 1 01:58:15.747966 kubelet[2469]: W1101 01:58:15.747764 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.90.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:15.747966 kubelet[2469]: E1101 01:58:15.747852 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.90.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" 
logger="UnhandledError" Nov 1 01:58:15.750743 kubelet[2469]: I1101 01:58:15.750342 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.750743 kubelet[2469]: E1101 01:58:15.750684 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.90.154:6443/api/v1/nodes\": dial tcp 10.244.90.154:6443: connect: connection refused" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:15.974227 kubelet[2469]: W1101 01:58:15.974068 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.90.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:15.974685 kubelet[2469]: E1101 01:58:15.974631 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.90.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:15.982018 kubelet[2469]: W1101 01:58:15.981754 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.90.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gnbw4.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:15.982018 kubelet[2469]: E1101 01:58:15.981962 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.90.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gnbw4.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:16.112772 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1633380187.mount: Deactivated successfully. Nov 1 01:58:16.117191 containerd[1626]: time="2025-11-01T01:58:16.116352049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:58:16.117599 containerd[1626]: time="2025-11-01T01:58:16.117486547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 1 01:58:16.118944 containerd[1626]: time="2025-11-01T01:58:16.118786636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:58:16.120699 containerd[1626]: time="2025-11-01T01:58:16.120649548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:58:16.121989 containerd[1626]: time="2025-11-01T01:58:16.121159871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:58:16.121989 containerd[1626]: time="2025-11-01T01:58:16.121227801Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:58:16.124472 containerd[1626]: time="2025-11-01T01:58:16.124403816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:58:16.127129 containerd[1626]: time="2025-11-01T01:58:16.127062580Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo 
digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.488487ms" Nov 1 01:58:16.127428 containerd[1626]: time="2025-11-01T01:58:16.127392497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:58:16.131161 containerd[1626]: time="2025-11-01T01:58:16.128706863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.235031ms" Nov 1 01:58:16.133205 containerd[1626]: time="2025-11-01T01:58:16.133165790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 709.713693ms" Nov 1 01:58:16.286231 containerd[1626]: time="2025-11-01T01:58:16.285941143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:16.286739 containerd[1626]: time="2025-11-01T01:58:16.286032929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:16.286739 containerd[1626]: time="2025-11-01T01:58:16.286051941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:16.286739 containerd[1626]: time="2025-11-01T01:58:16.286188014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:16.287568 containerd[1626]: time="2025-11-01T01:58:16.287493643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:16.288722 containerd[1626]: time="2025-11-01T01:58:16.288691857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:16.289103 containerd[1626]: time="2025-11-01T01:58:16.288826759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:16.290335 containerd[1626]: time="2025-11-01T01:58:16.290212627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:16.290749 containerd[1626]: time="2025-11-01T01:58:16.290679135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:16.292802 containerd[1626]: time="2025-11-01T01:58:16.290950688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:16.292802 containerd[1626]: time="2025-11-01T01:58:16.292609168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:16.292802 containerd[1626]: time="2025-11-01T01:58:16.292711166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:16.372755 kubelet[2469]: W1101 01:58:16.372686 2469 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.90.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.90.154:6443: connect: connection refused Nov 1 01:58:16.373080 kubelet[2469]: E1101 01:58:16.372776 2469 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.90.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.90.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:58:16.376865 kubelet[2469]: E1101 01:58:16.376836 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.90.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gnbw4.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.90.154:6443: connect: connection refused" interval="1.6s" Nov 1 01:58:16.402738 containerd[1626]: time="2025-11-01T01:58:16.402187539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gnbw4.gb1.brightbox.com,Uid:3ecf6633b58da7ed8d6a22fba171819a,Namespace:kube-system,Attempt:0,} returns sandbox id \"63911b0ebb474bd338fb76803511ee593fad7e4dad47412f0001f6a9b458a792\"" Nov 1 01:58:16.412604 containerd[1626]: time="2025-11-01T01:58:16.412570665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gnbw4.gb1.brightbox.com,Uid:972dc68a19738e9548edc6e91f62057b,Namespace:kube-system,Attempt:0,} returns sandbox id \"788e45535f014288c2f3642564abef6315495330f21588f621d0f33b448b0238\"" Nov 1 01:58:16.414477 containerd[1626]: time="2025-11-01T01:58:16.414440802Z" level=info msg="CreateContainer within sandbox \"63911b0ebb474bd338fb76803511ee593fad7e4dad47412f0001f6a9b458a792\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:58:16.421158 containerd[1626]: time="2025-11-01T01:58:16.421072869Z" level=info msg="CreateContainer within sandbox \"788e45535f014288c2f3642564abef6315495330f21588f621d0f33b448b0238\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:58:16.434452 containerd[1626]: time="2025-11-01T01:58:16.434241100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gnbw4.gb1.brightbox.com,Uid:8782202c501166a4060e598dd8655bac,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b94189b7507b204310c2c8785f677524c2e4feeba1d643fa7660bbbf1a4e733\"" Nov 1 01:58:16.440521 containerd[1626]: time="2025-11-01T01:58:16.440463650Z" level=info msg="CreateContainer within sandbox \"4b94189b7507b204310c2c8785f677524c2e4feeba1d643fa7660bbbf1a4e733\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:58:16.440778 containerd[1626]: time="2025-11-01T01:58:16.440690705Z" level=info msg="CreateContainer within sandbox \"63911b0ebb474bd338fb76803511ee593fad7e4dad47412f0001f6a9b458a792\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"973ca7c822b8827bcc6cd0f13ea81472898cee9fb7728dd1251ed84df58ab595\"" Nov 1 01:58:16.444231 containerd[1626]: time="2025-11-01T01:58:16.443174446Z" level=info msg="StartContainer for \"973ca7c822b8827bcc6cd0f13ea81472898cee9fb7728dd1251ed84df58ab595\"" Nov 1 01:58:16.448819 containerd[1626]: time="2025-11-01T01:58:16.448776820Z" level=info msg="CreateContainer within sandbox \"788e45535f014288c2f3642564abef6315495330f21588f621d0f33b448b0238\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51bd82a19f99965b50922c580c0ada1edc3984ff927e2e1481b047320245b460\"" Nov 1 01:58:16.449302 containerd[1626]: time="2025-11-01T01:58:16.449280575Z" level=info msg="StartContainer for \"51bd82a19f99965b50922c580c0ada1edc3984ff927e2e1481b047320245b460\"" Nov 1 01:58:16.451518 
containerd[1626]: time="2025-11-01T01:58:16.451490487Z" level=info msg="CreateContainer within sandbox \"4b94189b7507b204310c2c8785f677524c2e4feeba1d643fa7660bbbf1a4e733\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ffc9c4381750d9c85c3ff6597a2de4d8b26dbcbbfc37d3884f57b4ce78649ac\"" Nov 1 01:58:16.451903 containerd[1626]: time="2025-11-01T01:58:16.451884600Z" level=info msg="StartContainer for \"7ffc9c4381750d9c85c3ff6597a2de4d8b26dbcbbfc37d3884f57b4ce78649ac\"" Nov 1 01:58:16.555265 kubelet[2469]: I1101 01:58:16.555197 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:16.556012 kubelet[2469]: E1101 01:58:16.555988 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.90.154:6443/api/v1/nodes\": dial tcp 10.244.90.154:6443: connect: connection refused" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:16.567158 containerd[1626]: time="2025-11-01T01:58:16.566325314Z" level=info msg="StartContainer for \"7ffc9c4381750d9c85c3ff6597a2de4d8b26dbcbbfc37d3884f57b4ce78649ac\" returns successfully" Nov 1 01:58:16.584261 containerd[1626]: time="2025-11-01T01:58:16.584003709Z" level=info msg="StartContainer for \"51bd82a19f99965b50922c580c0ada1edc3984ff927e2e1481b047320245b460\" returns successfully" Nov 1 01:58:16.608470 containerd[1626]: time="2025-11-01T01:58:16.606932364Z" level=info msg="StartContainer for \"973ca7c822b8827bcc6cd0f13ea81472898cee9fb7728dd1251ed84df58ab595\" returns successfully" Nov 1 01:58:17.021937 kubelet[2469]: E1101 01:58:17.021864 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:17.027476 kubelet[2469]: E1101 01:58:17.026385 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:17.029440 kubelet[2469]: E1101 01:58:17.029421 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.036617 kubelet[2469]: E1101 01:58:18.036576 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.038567 kubelet[2469]: E1101 01:58:18.038539 2469 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.161461 kubelet[2469]: I1101 01:58:18.161427 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.402906 kubelet[2469]: E1101 01:58:18.402849 2469 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gnbw4.gb1.brightbox.com\" not found" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.559258 kubelet[2469]: I1101 01:58:18.559121 2469 kubelet_node_status.go:78] "Successfully registered node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.559810 kubelet[2469]: E1101 01:58:18.559541 2469 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-gnbw4.gb1.brightbox.com\": node \"srv-gnbw4.gb1.brightbox.com\" not found" Nov 1 01:58:18.587266 kubelet[2469]: E1101 01:58:18.587102 2469 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" Nov 1 01:58:18.688032 kubelet[2469]: E1101 01:58:18.687853 2469 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gnbw4.gb1.brightbox.com\" not found" 
Nov 1 01:58:18.771465 kubelet[2469]: I1101 01:58:18.771385 2469 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.784684 kubelet[2469]: E1101 01:58:18.784622 2469 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.785397 kubelet[2469]: I1101 01:58:18.785007 2469 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.789469 kubelet[2469]: E1101 01:58:18.789242 2469 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.789469 kubelet[2469]: I1101 01:58:18.789303 2469 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.792660 kubelet[2469]: E1101 01:58:18.792627 2469 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gnbw4.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:18.939017 kubelet[2469]: I1101 01:58:18.938649 2469 apiserver.go:52] "Watching apiserver" Nov 1 01:58:18.974332 kubelet[2469]: I1101 01:58:18.974201 2469 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:58:19.529119 kubelet[2469]: I1101 01:58:19.529020 2469 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:19.540400 kubelet[2469]: W1101 01:58:19.539507 2469 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:58:20.964425 systemd[1]: Reloading requested from client PID 2741 ('systemctl') (unit session-9.scope)... Nov 1 01:58:20.964443 systemd[1]: Reloading... Nov 1 01:58:21.066761 zram_generator::config[2786]: No configuration found. Nov 1 01:58:21.092892 kubelet[2469]: I1101 01:58:21.091710 2469 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.112765 kubelet[2469]: W1101 01:58:21.112220 2469 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:58:21.212607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:58:21.306200 systemd[1]: Reloading finished in 341 ms. Nov 1 01:58:21.343870 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:58:21.357501 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:58:21.357834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:58:21.366293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:58:21.557530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:58:21.575539 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 01:58:21.653963 kubelet[2853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 01:58:21.654480 kubelet[2853]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:58:21.654520 kubelet[2853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:58:21.654671 kubelet[2853]: I1101 01:58:21.654635 2853 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:58:21.662686 kubelet[2853]: I1101 01:58:21.662641 2853 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:58:21.662686 kubelet[2853]: I1101 01:58:21.662676 2853 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:58:21.663054 kubelet[2853]: I1101 01:58:21.663020 2853 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:58:21.664581 kubelet[2853]: I1101 01:58:21.664545 2853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 01:58:21.671087 kubelet[2853]: I1101 01:58:21.670949 2853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:58:21.674703 kubelet[2853]: E1101 01:58:21.674674 2853 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:58:21.674792 kubelet[2853]: I1101 01:58:21.674784 2853 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 01:58:21.678095 kubelet[2853]: I1101 01:58:21.678034 2853 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 01:58:21.678944 kubelet[2853]: I1101 01:58:21.678611 2853 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:58:21.678944 kubelet[2853]: I1101 01:58:21.678640 2853 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gnbw4.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null,"CgroupVersion":1} Nov 1 01:58:21.678944 kubelet[2853]: I1101 01:58:21.678834 2853 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:58:21.678944 kubelet[2853]: I1101 01:58:21.678843 2853 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:58:21.681289 kubelet[2853]: I1101 01:58:21.681222 2853 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:58:21.681535 kubelet[2853]: I1101 01:58:21.681524 2853 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:58:21.681670 kubelet[2853]: I1101 01:58:21.681619 2853 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:58:21.681670 kubelet[2853]: I1101 01:58:21.681645 2853 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:58:21.681797 kubelet[2853]: I1101 01:58:21.681751 2853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:58:21.691467 kubelet[2853]: I1101 01:58:21.688668 2853 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 01:58:21.691467 kubelet[2853]: I1101 01:58:21.689116 2853 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:58:21.695582 kubelet[2853]: I1101 01:58:21.695515 2853 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:58:21.695582 kubelet[2853]: I1101 01:58:21.695582 2853 server.go:1287] "Started kubelet" Nov 1 01:58:21.697743 kubelet[2853]: I1101 01:58:21.697731 2853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:58:21.697853 kubelet[2853]: I1101 01:58:21.697796 2853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:58:21.698240 kubelet[2853]: I1101 01:58:21.698221 2853 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:58:21.703121 kubelet[2853]: I1101 
01:58:21.703096 2853 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:58:21.709066 kubelet[2853]: I1101 01:58:21.697732 2853 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:58:21.710503 kubelet[2853]: I1101 01:58:21.710491 2853 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:58:21.713322 kubelet[2853]: I1101 01:58:21.706958 2853 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:58:21.713641 kubelet[2853]: I1101 01:58:21.706947 2853 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:58:21.714234 kubelet[2853]: I1101 01:58:21.713893 2853 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:58:21.729254 kubelet[2853]: E1101 01:58:21.729229 2853 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:58:21.730634 kubelet[2853]: I1101 01:58:21.730616 2853 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:58:21.730634 kubelet[2853]: I1101 01:58:21.730631 2853 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:58:21.730735 kubelet[2853]: I1101 01:58:21.730693 2853 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:58:21.732911 kubelet[2853]: I1101 01:58:21.732807 2853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:58:21.735383 kubelet[2853]: I1101 01:58:21.735068 2853 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 01:58:21.735383 kubelet[2853]: I1101 01:58:21.735101 2853 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:58:21.735383 kubelet[2853]: I1101 01:58:21.735124 2853 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:58:21.735383 kubelet[2853]: I1101 01:58:21.735132 2853 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:58:21.735383 kubelet[2853]: E1101 01:58:21.735195 2853 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:58:21.805446 kubelet[2853]: I1101 01:58:21.805420 2853 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805647 2853 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805667 2853 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805838 2853 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805848 2853 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805869 2853 policy_none.go:49] "None policy: Start" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805881 2853 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.805891 2853 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.806013 2853 state_mem.go:75] "Updated machine memory state" Nov 1 01:58:21.807159 kubelet[2853]: I1101 01:58:21.807125 2853 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:58:21.807657 kubelet[2853]: I1101 01:58:21.807644 
2853 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:58:21.807776 kubelet[2853]: I1101 01:58:21.807739 2853 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:58:21.808430 kubelet[2853]: I1101 01:58:21.808409 2853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:58:21.812227 kubelet[2853]: E1101 01:58:21.810882 2853 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:58:21.836584 kubelet[2853]: I1101 01:58:21.836556 2853 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.837629 kubelet[2853]: I1101 01:58:21.837502 2853 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.839251 kubelet[2853]: I1101 01:58:21.839235 2853 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.844371 kubelet[2853]: W1101 01:58:21.844353 2853 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:58:21.846088 kubelet[2853]: W1101 01:58:21.846071 2853 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:58:21.846179 kubelet[2853]: W1101 01:58:21.846102 2853 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:58:21.846179 kubelet[2853]: E1101 01:58:21.846134 2853 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" already exists" 
pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.846273 kubelet[2853]: E1101 01:58:21.846209 2853 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gnbw4.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.918549 kubelet[2853]: I1101 01:58:21.918451 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-ca-certs\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.919030 kubelet[2853]: I1101 01:58:21.918974 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.919266 kubelet[2853]: I1101 01:58:21.919239 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8782202c501166a4060e598dd8655bac-kubeconfig\") pod \"kube-scheduler-srv-gnbw4.gb1.brightbox.com\" (UID: \"8782202c501166a4060e598dd8655bac\") " pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.919434 kubelet[2853]: I1101 01:58:21.919412 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-k8s-certs\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " 
pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.919661 kubelet[2853]: I1101 01:58:21.919589 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-kubeconfig\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.919893 kubelet[2853]: I1101 01:58:21.919785 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/972dc68a19738e9548edc6e91f62057b-ca-certs\") pod \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" (UID: \"972dc68a19738e9548edc6e91f62057b\") " pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.920203 kubelet[2853]: I1101 01:58:21.920000 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/972dc68a19738e9548edc6e91f62057b-k8s-certs\") pod \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" (UID: \"972dc68a19738e9548edc6e91f62057b\") " pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.920203 kubelet[2853]: I1101 01:58:21.920086 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/972dc68a19738e9548edc6e91f62057b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" (UID: \"972dc68a19738e9548edc6e91f62057b\") " pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.920203 kubelet[2853]: I1101 01:58:21.920132 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3ecf6633b58da7ed8d6a22fba171819a-flexvolume-dir\") pod \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" (UID: \"3ecf6633b58da7ed8d6a22fba171819a\") " pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.930754 kubelet[2853]: I1101 01:58:21.930688 2853 kubelet_node_status.go:75] "Attempting to register node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.953054 kubelet[2853]: I1101 01:58:21.952851 2853 kubelet_node_status.go:124] "Node was previously registered" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:21.953054 kubelet[2853]: I1101 01:58:21.952989 2853 kubelet_node_status.go:78] "Successfully registered node" node="srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:22.682908 kubelet[2853]: I1101 01:58:22.682828 2853 apiserver.go:52] "Watching apiserver" Nov 1 01:58:22.714123 kubelet[2853]: I1101 01:58:22.714035 2853 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:58:22.772932 kubelet[2853]: I1101 01:58:22.772458 2853 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:22.773170 kubelet[2853]: I1101 01:58:22.773157 2853 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:22.781689 kubelet[2853]: W1101 01:58:22.781210 2853 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:58:22.781689 kubelet[2853]: E1101 01:58:22.781275 2853 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gnbw4.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:22.781689 kubelet[2853]: W1101 01:58:22.781468 2853 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; 
a DNS label is recommended: [must not contain dots] Nov 1 01:58:22.781689 kubelet[2853]: E1101 01:58:22.781493 2853 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-gnbw4.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" Nov 1 01:58:22.834336 kubelet[2853]: I1101 01:58:22.834269 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gnbw4.gb1.brightbox.com" podStartSLOduration=1.834215726 podStartE2EDuration="1.834215726s" podCreationTimestamp="2025-11-01 01:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:58:22.833621666 +0000 UTC m=+1.238010015" watchObservedRunningTime="2025-11-01 01:58:22.834215726 +0000 UTC m=+1.238604067" Nov 1 01:58:22.849063 kubelet[2853]: I1101 01:58:22.848879 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gnbw4.gb1.brightbox.com" podStartSLOduration=3.848861717 podStartE2EDuration="3.848861717s" podCreationTimestamp="2025-11-01 01:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:58:22.848630498 +0000 UTC m=+1.253018848" watchObservedRunningTime="2025-11-01 01:58:22.848861717 +0000 UTC m=+1.253250059" Nov 1 01:58:22.862712 kubelet[2853]: I1101 01:58:22.862615 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gnbw4.gb1.brightbox.com" podStartSLOduration=1.862584078 podStartE2EDuration="1.862584078s" podCreationTimestamp="2025-11-01 01:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:58:22.860577825 +0000 UTC m=+1.264966272" 
watchObservedRunningTime="2025-11-01 01:58:22.862584078 +0000 UTC m=+1.266972493" Nov 1 01:58:26.784309 kubelet[2853]: I1101 01:58:26.784249 2853 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 01:58:26.785093 kubelet[2853]: I1101 01:58:26.785074 2853 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 01:58:26.785686 containerd[1626]: time="2025-11-01T01:58:26.784857729Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 01:58:27.557166 kubelet[2853]: I1101 01:58:27.555635 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a104873-d5a1-4387-9135-4d22b1e42589-xtables-lock\") pod \"kube-proxy-654t7\" (UID: \"5a104873-d5a1-4387-9135-4d22b1e42589\") " pod="kube-system/kube-proxy-654t7" Nov 1 01:58:27.557166 kubelet[2853]: I1101 01:58:27.555690 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a104873-d5a1-4387-9135-4d22b1e42589-lib-modules\") pod \"kube-proxy-654t7\" (UID: \"5a104873-d5a1-4387-9135-4d22b1e42589\") " pod="kube-system/kube-proxy-654t7" Nov 1 01:58:27.557166 kubelet[2853]: I1101 01:58:27.555713 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xq7j\" (UniqueName: \"kubernetes.io/projected/5a104873-d5a1-4387-9135-4d22b1e42589-kube-api-access-8xq7j\") pod \"kube-proxy-654t7\" (UID: \"5a104873-d5a1-4387-9135-4d22b1e42589\") " pod="kube-system/kube-proxy-654t7" Nov 1 01:58:27.557166 kubelet[2853]: I1101 01:58:27.555738 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/5a104873-d5a1-4387-9135-4d22b1e42589-kube-proxy\") pod \"kube-proxy-654t7\" (UID: \"5a104873-d5a1-4387-9135-4d22b1e42589\") " pod="kube-system/kube-proxy-654t7" Nov 1 01:58:27.775210 containerd[1626]: time="2025-11-01T01:58:27.774445241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-654t7,Uid:5a104873-d5a1-4387-9135-4d22b1e42589,Namespace:kube-system,Attempt:0,}" Nov 1 01:58:27.825332 containerd[1626]: time="2025-11-01T01:58:27.821173730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:27.825332 containerd[1626]: time="2025-11-01T01:58:27.823436124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:27.825332 containerd[1626]: time="2025-11-01T01:58:27.823453061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:27.825332 containerd[1626]: time="2025-11-01T01:58:27.823570071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:27.860099 systemd[1]: run-containerd-runc-k8s.io-57a05e7f97870cfae315816dba0a0138a90a41ae8506d5141491447ec1b24140-runc.wX9wow.mount: Deactivated successfully. 
Nov 1 01:58:27.886044 containerd[1626]: time="2025-11-01T01:58:27.886002489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-654t7,Uid:5a104873-d5a1-4387-9135-4d22b1e42589,Namespace:kube-system,Attempt:0,} returns sandbox id \"57a05e7f97870cfae315816dba0a0138a90a41ae8506d5141491447ec1b24140\"" Nov 1 01:58:27.890779 containerd[1626]: time="2025-11-01T01:58:27.890678384Z" level=info msg="CreateContainer within sandbox \"57a05e7f97870cfae315816dba0a0138a90a41ae8506d5141491447ec1b24140\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:58:27.904760 containerd[1626]: time="2025-11-01T01:58:27.904597143Z" level=info msg="CreateContainer within sandbox \"57a05e7f97870cfae315816dba0a0138a90a41ae8506d5141491447ec1b24140\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc806e64d6f97e1f9e426a99b4e272e2ec4c49aa1267cecd14105eef0dcb5f34\"" Nov 1 01:58:27.908609 containerd[1626]: time="2025-11-01T01:58:27.908558813Z" level=info msg="StartContainer for \"bc806e64d6f97e1f9e426a99b4e272e2ec4c49aa1267cecd14105eef0dcb5f34\"" Nov 1 01:58:28.018599 containerd[1626]: time="2025-11-01T01:58:28.018520863Z" level=info msg="StartContainer for \"bc806e64d6f97e1f9e426a99b4e272e2ec4c49aa1267cecd14105eef0dcb5f34\" returns successfully" Nov 1 01:58:28.061472 kubelet[2853]: I1101 01:58:28.061362 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77d68\" (UniqueName: \"kubernetes.io/projected/c6cf54f6-0d6d-42ff-b059-076ab0626748-kube-api-access-77d68\") pod \"tigera-operator-7dcd859c48-pvbd4\" (UID: \"c6cf54f6-0d6d-42ff-b059-076ab0626748\") " pod="tigera-operator/tigera-operator-7dcd859c48-pvbd4" Nov 1 01:58:28.061472 kubelet[2853]: I1101 01:58:28.061424 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/c6cf54f6-0d6d-42ff-b059-076ab0626748-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pvbd4\" (UID: \"c6cf54f6-0d6d-42ff-b059-076ab0626748\") " pod="tigera-operator/tigera-operator-7dcd859c48-pvbd4" Nov 1 01:58:28.284765 containerd[1626]: time="2025-11-01T01:58:28.284682891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pvbd4,Uid:c6cf54f6-0d6d-42ff-b059-076ab0626748,Namespace:tigera-operator,Attempt:0,}" Nov 1 01:58:28.332361 containerd[1626]: time="2025-11-01T01:58:28.331950862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:28.332361 containerd[1626]: time="2025-11-01T01:58:28.332034998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:28.332361 containerd[1626]: time="2025-11-01T01:58:28.332049935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:28.332361 containerd[1626]: time="2025-11-01T01:58:28.332145370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:28.406027 containerd[1626]: time="2025-11-01T01:58:28.405963729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pvbd4,Uid:c6cf54f6-0d6d-42ff-b059-076ab0626748,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b58dc8004ea24528e5c71854feb073b3a1a8091bf36192059b8ed15a68702c58\"" Nov 1 01:58:28.411419 containerd[1626]: time="2025-11-01T01:58:28.411380816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:58:28.830418 kubelet[2853]: I1101 01:58:28.829298 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-654t7" podStartSLOduration=1.829276518 podStartE2EDuration="1.829276518s" podCreationTimestamp="2025-11-01 01:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:58:28.828211881 +0000 UTC m=+7.232600233" watchObservedRunningTime="2025-11-01 01:58:28.829276518 +0000 UTC m=+7.233664875" Nov 1 01:58:30.136811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282278070.mount: Deactivated successfully. 
Nov 1 01:58:30.773772 containerd[1626]: time="2025-11-01T01:58:30.773716297Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:30.774807 containerd[1626]: time="2025-11-01T01:58:30.774521780Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 01:58:30.775221 containerd[1626]: time="2025-11-01T01:58:30.775197509Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:30.778314 containerd[1626]: time="2025-11-01T01:58:30.777335238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:30.778314 containerd[1626]: time="2025-11-01T01:58:30.778163265Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.366495491s" Nov 1 01:58:30.778314 containerd[1626]: time="2025-11-01T01:58:30.778199280Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:58:30.781836 containerd[1626]: time="2025-11-01T01:58:30.781097960Z" level=info msg="CreateContainer within sandbox \"b58dc8004ea24528e5c71854feb073b3a1a8091bf36192059b8ed15a68702c58\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:58:30.794400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883134678.mount: Deactivated successfully. 
Nov 1 01:58:30.800104 containerd[1626]: time="2025-11-01T01:58:30.800058763Z" level=info msg="CreateContainer within sandbox \"b58dc8004ea24528e5c71854feb073b3a1a8091bf36192059b8ed15a68702c58\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d719bfd1ffb77050dcacf51bfcd0d10ead0d99be27e418cdf170408412582aea\"" Nov 1 01:58:30.800603 containerd[1626]: time="2025-11-01T01:58:30.800581611Z" level=info msg="StartContainer for \"d719bfd1ffb77050dcacf51bfcd0d10ead0d99be27e418cdf170408412582aea\"" Nov 1 01:58:30.883561 containerd[1626]: time="2025-11-01T01:58:30.883513394Z" level=info msg="StartContainer for \"d719bfd1ffb77050dcacf51bfcd0d10ead0d99be27e418cdf170408412582aea\" returns successfully" Nov 1 01:58:32.381195 kubelet[2853]: I1101 01:58:32.380943 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pvbd4" podStartSLOduration=3.011817139 podStartE2EDuration="5.380922142s" podCreationTimestamp="2025-11-01 01:58:27 +0000 UTC" firstStartedPulling="2025-11-01 01:58:28.410218272 +0000 UTC m=+6.814606604" lastFinishedPulling="2025-11-01 01:58:30.779323272 +0000 UTC m=+9.183711607" observedRunningTime="2025-11-01 01:58:31.824397121 +0000 UTC m=+10.228785501" watchObservedRunningTime="2025-11-01 01:58:32.380922142 +0000 UTC m=+10.785310530" Nov 1 01:58:38.091694 sudo[1896]: pam_unix(sudo:session): session closed for user root Nov 1 01:58:38.238936 sshd[1892]: pam_unix(sshd:session): session closed for user core Nov 1 01:58:38.249219 systemd[1]: sshd@6-10.244.90.154:22-147.75.109.163:33958.service: Deactivated successfully. Nov 1 01:58:38.257056 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 01:58:38.259924 systemd-logind[1594]: Session 9 logged out. Waiting for processes to exit. Nov 1 01:58:38.267408 systemd-logind[1594]: Removed session 9. 
Nov 1 01:58:44.294028 kubelet[2853]: I1101 01:58:44.293756 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9badd81-653c-459a-beaa-cc9f4d22192e-typha-certs\") pod \"calico-typha-8c6fc94d4-bdlj5\" (UID: \"f9badd81-653c-459a-beaa-cc9f4d22192e\") " pod="calico-system/calico-typha-8c6fc94d4-bdlj5" Nov 1 01:58:44.294028 kubelet[2853]: I1101 01:58:44.293812 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9badd81-653c-459a-beaa-cc9f4d22192e-tigera-ca-bundle\") pod \"calico-typha-8c6fc94d4-bdlj5\" (UID: \"f9badd81-653c-459a-beaa-cc9f4d22192e\") " pod="calico-system/calico-typha-8c6fc94d4-bdlj5" Nov 1 01:58:44.294028 kubelet[2853]: I1101 01:58:44.293839 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzhb\" (UniqueName: \"kubernetes.io/projected/f9badd81-653c-459a-beaa-cc9f4d22192e-kube-api-access-tlzhb\") pod \"calico-typha-8c6fc94d4-bdlj5\" (UID: \"f9badd81-653c-459a-beaa-cc9f4d22192e\") " pod="calico-system/calico-typha-8c6fc94d4-bdlj5" Nov 1 01:58:44.394673 kubelet[2853]: I1101 01:58:44.394618 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-policysync\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394852 kubelet[2853]: I1101 01:58:44.394668 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-cni-bin-dir\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 
01:58:44.394852 kubelet[2853]: I1101 01:58:44.394738 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-var-run-calico\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394852 kubelet[2853]: I1101 01:58:44.394796 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-lib-modules\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394852 kubelet[2853]: I1101 01:58:44.394814 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63ce7e93-f6a5-4ca9-b480-90a667651001-tigera-ca-bundle\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394852 kubelet[2853]: I1101 01:58:44.394830 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvbs2\" (UniqueName: \"kubernetes.io/projected/63ce7e93-f6a5-4ca9-b480-90a667651001-kube-api-access-xvbs2\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394988 kubelet[2853]: I1101 01:58:44.394859 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-var-lib-calico\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394988 kubelet[2853]: I1101 01:58:44.394875 2853 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-cni-log-dir\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394988 kubelet[2853]: I1101 01:58:44.394890 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-flexvol-driver-host\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394988 kubelet[2853]: I1101 01:58:44.394907 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/63ce7e93-f6a5-4ca9-b480-90a667651001-node-certs\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.394988 kubelet[2853]: I1101 01:58:44.394926 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-cni-net-dir\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.395897 kubelet[2853]: I1101 01:58:44.394942 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63ce7e93-f6a5-4ca9-b480-90a667651001-xtables-lock\") pod \"calico-node-qfn6x\" (UID: \"63ce7e93-f6a5-4ca9-b480-90a667651001\") " pod="calico-system/calico-node-qfn6x" Nov 1 01:58:44.501324 kubelet[2853]: E1101 01:58:44.500068 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Nov 1 01:58:44.501324 kubelet[2853]: W1101 01:58:44.500102 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.502542 kubelet[2853]: E1101 01:58:44.502214 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.503932 kubelet[2853]: E1101 01:58:44.503632 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.503932 kubelet[2853]: W1101 01:58:44.503649 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.503932 kubelet[2853]: E1101 01:58:44.503669 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.504719 kubelet[2853]: E1101 01:58:44.504376 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.504719 kubelet[2853]: W1101 01:58:44.504475 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.504719 kubelet[2853]: E1101 01:58:44.504491 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.508032 kubelet[2853]: E1101 01:58:44.508019 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.509692 kubelet[2853]: W1101 01:58:44.509652 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.509918 kubelet[2853]: E1101 01:58:44.509810 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.511452 kubelet[2853]: E1101 01:58:44.511436 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.511643 kubelet[2853]: W1101 01:58:44.511525 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.511643 kubelet[2853]: E1101 01:58:44.511544 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.511785 kubelet[2853]: E1101 01:58:44.511776 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.514087 kubelet[2853]: W1101 01:58:44.513513 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.514087 kubelet[2853]: E1101 01:58:44.513537 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.514671 kubelet[2853]: E1101 01:58:44.514269 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.514671 kubelet[2853]: W1101 01:58:44.514281 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.514671 kubelet[2853]: E1101 01:58:44.514293 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.518596 kubelet[2853]: E1101 01:58:44.515803 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.518596 kubelet[2853]: W1101 01:58:44.515817 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.518596 kubelet[2853]: E1101 01:58:44.515829 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.519716 kubelet[2853]: E1101 01:58:44.519435 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.519716 kubelet[2853]: W1101 01:58:44.519450 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.519716 kubelet[2853]: E1101 01:58:44.519463 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.520625 kubelet[2853]: E1101 01:58:44.520275 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.520625 kubelet[2853]: W1101 01:58:44.520290 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.520625 kubelet[2853]: E1101 01:58:44.520302 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.523423 kubelet[2853]: E1101 01:58:44.523313 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.523988 kubelet[2853]: W1101 01:58:44.523757 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.523988 kubelet[2853]: E1101 01:58:44.523780 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.530492 kubelet[2853]: E1101 01:58:44.526698 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.530492 kubelet[2853]: W1101 01:58:44.526710 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.530492 kubelet[2853]: E1101 01:58:44.527862 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.530492 kubelet[2853]: E1101 01:58:44.529005 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:58:44.533091 kubelet[2853]: E1101 01:58:44.533076 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.533537 kubelet[2853]: W1101 01:58:44.533505 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.533622 kubelet[2853]: E1101 01:58:44.533611 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.534969 kubelet[2853]: E1101 01:58:44.534955 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.535069 kubelet[2853]: W1101 01:58:44.535058 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.535133 kubelet[2853]: E1101 01:58:44.535123 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.536749 containerd[1626]: time="2025-11-01T01:58:44.536561889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c6fc94d4-bdlj5,Uid:f9badd81-653c-459a-beaa-cc9f4d22192e,Namespace:calico-system,Attempt:0,}" Nov 1 01:58:44.595762 kubelet[2853]: E1101 01:58:44.593854 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.595762 kubelet[2853]: W1101 01:58:44.593876 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.595762 kubelet[2853]: E1101 01:58:44.593901 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.596588 kubelet[2853]: E1101 01:58:44.596187 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.596588 kubelet[2853]: W1101 01:58:44.596223 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.596588 kubelet[2853]: E1101 01:58:44.596273 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.601569 kubelet[2853]: E1101 01:58:44.601280 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.601569 kubelet[2853]: W1101 01:58:44.601299 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.601569 kubelet[2853]: E1101 01:58:44.601433 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.607220 kubelet[2853]: E1101 01:58:44.606970 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.607220 kubelet[2853]: W1101 01:58:44.606988 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.607220 kubelet[2853]: E1101 01:58:44.607006 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.607875 kubelet[2853]: E1101 01:58:44.607862 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.608067 kubelet[2853]: W1101 01:58:44.607995 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.608067 kubelet[2853]: E1101 01:58:44.608012 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.609497 kubelet[2853]: E1101 01:58:44.609387 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.609497 kubelet[2853]: W1101 01:58:44.609401 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.609497 kubelet[2853]: E1101 01:58:44.609414 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.610067 kubelet[2853]: E1101 01:58:44.609887 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.610067 kubelet[2853]: W1101 01:58:44.609900 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.610067 kubelet[2853]: E1101 01:58:44.609915 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.614183 kubelet[2853]: E1101 01:58:44.614169 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.614314 kubelet[2853]: W1101 01:58:44.614245 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.614314 kubelet[2853]: E1101 01:58:44.614260 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.615088 kubelet[2853]: E1101 01:58:44.614956 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.615088 kubelet[2853]: W1101 01:58:44.614970 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.615088 kubelet[2853]: E1101 01:58:44.614984 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.616407 kubelet[2853]: E1101 01:58:44.616318 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.618307 kubelet[2853]: W1101 01:58:44.618215 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.618307 kubelet[2853]: E1101 01:58:44.618238 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.620028 kubelet[2853]: E1101 01:58:44.619955 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.620028 kubelet[2853]: W1101 01:58:44.619969 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.620028 kubelet[2853]: E1101 01:58:44.619982 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.620793 kubelet[2853]: E1101 01:58:44.620687 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.620793 kubelet[2853]: W1101 01:58:44.620700 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.620793 kubelet[2853]: E1101 01:58:44.620712 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.623185 kubelet[2853]: E1101 01:58:44.623095 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.623185 kubelet[2853]: W1101 01:58:44.623109 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.623185 kubelet[2853]: E1101 01:58:44.623122 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.626853 kubelet[2853]: E1101 01:58:44.626402 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.626853 kubelet[2853]: W1101 01:58:44.626418 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.626853 kubelet[2853]: E1101 01:58:44.626444 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.628245 kubelet[2853]: E1101 01:58:44.627539 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.628245 kubelet[2853]: W1101 01:58:44.627554 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.628245 kubelet[2853]: E1101 01:58:44.627566 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.630104 kubelet[2853]: E1101 01:58:44.630048 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.630104 kubelet[2853]: W1101 01:58:44.630061 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.630104 kubelet[2853]: E1101 01:58:44.630073 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.630854 kubelet[2853]: E1101 01:58:44.630788 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.630854 kubelet[2853]: W1101 01:58:44.630801 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.630854 kubelet[2853]: E1101 01:58:44.630815 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.631297 kubelet[2853]: E1101 01:58:44.631198 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.631297 kubelet[2853]: W1101 01:58:44.631210 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.631297 kubelet[2853]: E1101 01:58:44.631222 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.631596 kubelet[2853]: E1101 01:58:44.631529 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.631596 kubelet[2853]: W1101 01:58:44.631543 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.631596 kubelet[2853]: E1101 01:58:44.631554 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.632049 kubelet[2853]: E1101 01:58:44.631995 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.632049 kubelet[2853]: W1101 01:58:44.632013 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.632049 kubelet[2853]: E1101 01:58:44.632025 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.632676 kubelet[2853]: E1101 01:58:44.632532 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.632676 kubelet[2853]: W1101 01:58:44.632544 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.632676 kubelet[2853]: E1101 01:58:44.632556 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.632676 kubelet[2853]: I1101 01:58:44.632590 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4caf741f-c22d-4e76-9e9d-18f81ca6bba2-registration-dir\") pod \"csi-node-driver-b5qvt\" (UID: \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\") " pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:44.632978 kubelet[2853]: E1101 01:58:44.632825 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.633426 kubelet[2853]: W1101 01:58:44.633172 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.633426 kubelet[2853]: E1101 01:58:44.633200 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.633426 kubelet[2853]: I1101 01:58:44.633223 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4caf741f-c22d-4e76-9e9d-18f81ca6bba2-varrun\") pod \"csi-node-driver-b5qvt\" (UID: \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\") " pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:44.634034 kubelet[2853]: E1101 01:58:44.634022 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.634220 kubelet[2853]: W1101 01:58:44.634084 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.634220 kubelet[2853]: E1101 01:58:44.634107 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.635433 kubelet[2853]: E1101 01:58:44.635420 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.636243 kubelet[2853]: W1101 01:58:44.635466 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.636243 kubelet[2853]: E1101 01:58:44.636072 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.636623 kubelet[2853]: E1101 01:58:44.636595 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.636796 kubelet[2853]: W1101 01:58:44.636609 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.636796 kubelet[2853]: E1101 01:58:44.636732 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.636796 kubelet[2853]: I1101 01:58:44.636756 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4caf741f-c22d-4e76-9e9d-18f81ca6bba2-kubelet-dir\") pod \"csi-node-driver-b5qvt\" (UID: \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\") " pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:44.638464 kubelet[2853]: E1101 01:58:44.638295 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.639270 kubelet[2853]: W1101 01:58:44.638784 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.639876 kubelet[2853]: E1101 01:58:44.639849 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.641716 kubelet[2853]: E1101 01:58:44.640251 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.641716 kubelet[2853]: W1101 01:58:44.640268 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.641716 kubelet[2853]: E1101 01:58:44.640284 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.641716 kubelet[2853]: I1101 01:58:44.640305 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4caf741f-c22d-4e76-9e9d-18f81ca6bba2-socket-dir\") pod \"csi-node-driver-b5qvt\" (UID: \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\") " pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:44.647793 kubelet[2853]: E1101 01:58:44.647446 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.647793 kubelet[2853]: W1101 01:58:44.647652 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.647793 kubelet[2853]: E1101 01:58:44.647742 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.650003 kubelet[2853]: E1101 01:58:44.649502 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.650003 kubelet[2853]: W1101 01:58:44.649531 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.650003 kubelet[2853]: E1101 01:58:44.649546 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.650003 kubelet[2853]: E1101 01:58:44.649889 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.650003 kubelet[2853]: W1101 01:58:44.649902 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.650003 kubelet[2853]: E1101 01:58:44.649920 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.651424 kubelet[2853]: E1101 01:58:44.650111 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.651424 kubelet[2853]: W1101 01:58:44.650119 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.651424 kubelet[2853]: E1101 01:58:44.650146 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.651424 kubelet[2853]: E1101 01:58:44.650367 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.651424 kubelet[2853]: W1101 01:58:44.650376 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.651424 kubelet[2853]: E1101 01:58:44.650386 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.651424 kubelet[2853]: E1101 01:58:44.650676 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.651424 kubelet[2853]: W1101 01:58:44.650724 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.651424 kubelet[2853]: E1101 01:58:44.650737 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.651705 kubelet[2853]: I1101 01:58:44.650760 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j25c\" (UniqueName: \"kubernetes.io/projected/4caf741f-c22d-4e76-9e9d-18f81ca6bba2-kube-api-access-4j25c\") pod \"csi-node-driver-b5qvt\" (UID: \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\") " pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:44.651705 kubelet[2853]: E1101 01:58:44.651029 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.651705 kubelet[2853]: W1101 01:58:44.651039 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.651705 kubelet[2853]: E1101 01:58:44.651051 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.651705 kubelet[2853]: E1101 01:58:44.651248 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.651705 kubelet[2853]: W1101 01:58:44.651258 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.651705 kubelet[2853]: E1101 01:58:44.651267 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.653804 containerd[1626]: time="2025-11-01T01:58:44.653355910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qfn6x,Uid:63ce7e93-f6a5-4ca9-b480-90a667651001,Namespace:calico-system,Attempt:0,}" Nov 1 01:58:44.661833 containerd[1626]: time="2025-11-01T01:58:44.660748074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:44.661833 containerd[1626]: time="2025-11-01T01:58:44.660827134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:44.661833 containerd[1626]: time="2025-11-01T01:58:44.660841674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:44.661833 containerd[1626]: time="2025-11-01T01:58:44.660951500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:44.705223 containerd[1626]: time="2025-11-01T01:58:44.705076915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:58:44.705223 containerd[1626]: time="2025-11-01T01:58:44.705172483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:58:44.705223 containerd[1626]: time="2025-11-01T01:58:44.705188008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:44.705639 containerd[1626]: time="2025-11-01T01:58:44.705606511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:58:44.753730 kubelet[2853]: E1101 01:58:44.753695 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.753730 kubelet[2853]: W1101 01:58:44.753719 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.753933 kubelet[2853]: E1101 01:58:44.753744 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.753989 kubelet[2853]: E1101 01:58:44.753978 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.754029 kubelet[2853]: W1101 01:58:44.753990 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.754029 kubelet[2853]: E1101 01:58:44.754001 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.754194 kubelet[2853]: E1101 01:58:44.754182 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.754194 kubelet[2853]: W1101 01:58:44.754192 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.754291 kubelet[2853]: E1101 01:58:44.754214 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.754435 kubelet[2853]: E1101 01:58:44.754424 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.754435 kubelet[2853]: W1101 01:58:44.754435 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.754559 kubelet[2853]: E1101 01:58:44.754448 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.754921 kubelet[2853]: E1101 01:58:44.754906 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.754921 kubelet[2853]: W1101 01:58:44.754920 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.755160 kubelet[2853]: E1101 01:58:44.754950 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.755314 kubelet[2853]: E1101 01:58:44.755298 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.755520 kubelet[2853]: W1101 01:58:44.755393 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.755520 kubelet[2853]: E1101 01:58:44.755417 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.755642 kubelet[2853]: E1101 01:58:44.755635 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.755704 kubelet[2853]: W1101 01:58:44.755696 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.755818 kubelet[2853]: E1101 01:58:44.755767 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.755929 kubelet[2853]: E1101 01:58:44.755921 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.756116 kubelet[2853]: W1101 01:58:44.756016 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.756116 kubelet[2853]: E1101 01:58:44.756061 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.756267 kubelet[2853]: E1101 01:58:44.756259 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.756363 kubelet[2853]: W1101 01:58:44.756354 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.756444 kubelet[2853]: E1101 01:58:44.756428 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.756777 kubelet[2853]: E1101 01:58:44.756766 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.756923 kubelet[2853]: W1101 01:58:44.756840 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.756923 kubelet[2853]: E1101 01:58:44.756863 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.757303 kubelet[2853]: E1101 01:58:44.757204 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.757303 kubelet[2853]: W1101 01:58:44.757215 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.757303 kubelet[2853]: E1101 01:58:44.757232 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.757754 kubelet[2853]: E1101 01:58:44.757653 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.757754 kubelet[2853]: W1101 01:58:44.757664 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.757754 kubelet[2853]: E1101 01:58:44.757700 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.758389 kubelet[2853]: E1101 01:58:44.758280 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.758389 kubelet[2853]: W1101 01:58:44.758292 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.758389 kubelet[2853]: E1101 01:58:44.758315 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.759394 kubelet[2853]: E1101 01:58:44.759279 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.759394 kubelet[2853]: W1101 01:58:44.759293 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.759394 kubelet[2853]: E1101 01:58:44.759326 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.759808 kubelet[2853]: E1101 01:58:44.759704 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.759808 kubelet[2853]: W1101 01:58:44.759718 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.759808 kubelet[2853]: E1101 01:58:44.759744 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.760266 kubelet[2853]: E1101 01:58:44.760030 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.760266 kubelet[2853]: W1101 01:58:44.760044 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.760266 kubelet[2853]: E1101 01:58:44.760067 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.760695 kubelet[2853]: E1101 01:58:44.760682 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.760835 kubelet[2853]: W1101 01:58:44.760754 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.760835 kubelet[2853]: E1101 01:58:44.760785 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.762054 kubelet[2853]: E1101 01:58:44.761105 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.762054 kubelet[2853]: W1101 01:58:44.761961 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.762054 kubelet[2853]: E1101 01:58:44.761999 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.763159 kubelet[2853]: E1101 01:58:44.763003 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.763159 kubelet[2853]: W1101 01:58:44.763017 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.763159 kubelet[2853]: E1101 01:58:44.763046 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.763783 kubelet[2853]: E1101 01:58:44.763493 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.763783 kubelet[2853]: W1101 01:58:44.763507 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.763783 kubelet[2853]: E1101 01:58:44.763535 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.764702 kubelet[2853]: E1101 01:58:44.764489 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.764702 kubelet[2853]: W1101 01:58:44.764503 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.764702 kubelet[2853]: E1101 01:58:44.764530 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.765831 kubelet[2853]: E1101 01:58:44.765816 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.767516 kubelet[2853]: W1101 01:58:44.767163 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.768795 kubelet[2853]: E1101 01:58:44.768504 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.768795 kubelet[2853]: E1101 01:58:44.768596 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.768795 kubelet[2853]: W1101 01:58:44.768604 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.769670 kubelet[2853]: E1101 01:58:44.769657 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.769970 kubelet[2853]: W1101 01:58:44.769852 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.772450 kubelet[2853]: E1101 01:58:44.772189 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.772450 kubelet[2853]: W1101 01:58:44.772202 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.772450 kubelet[2853]: E1101 01:58:44.772214 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.772450 kubelet[2853]: E1101 01:58:44.772239 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:58:44.772450 kubelet[2853]: E1101 01:58:44.772250 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.785871 kubelet[2853]: E1101 01:58:44.785198 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:58:44.786521 kubelet[2853]: W1101 01:58:44.786358 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:58:44.786521 kubelet[2853]: E1101 01:58:44.786432 2853 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:58:44.794839 containerd[1626]: time="2025-11-01T01:58:44.794796055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qfn6x,Uid:63ce7e93-f6a5-4ca9-b480-90a667651001,Namespace:calico-system,Attempt:0,} returns sandbox id \"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\"" Nov 1 01:58:44.798439 containerd[1626]: time="2025-11-01T01:58:44.796473286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c6fc94d4-bdlj5,Uid:f9badd81-653c-459a-beaa-cc9f4d22192e,Namespace:calico-system,Attempt:0,} returns sandbox id \"76e009e30449675beea9fc8ef0a75b8c0d1920e3ee4a1fbff54462d821dc8a3f\"" Nov 1 01:58:44.799804 containerd[1626]: time="2025-11-01T01:58:44.799594996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:58:46.356371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695651205.mount: Deactivated successfully. 
Nov 1 01:58:46.495105 containerd[1626]: time="2025-11-01T01:58:46.495021382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:46.497972 containerd[1626]: time="2025-11-01T01:58:46.497885372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 1 01:58:46.500891 containerd[1626]: time="2025-11-01T01:58:46.500128521Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:46.503881 containerd[1626]: time="2025-11-01T01:58:46.503720445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:58:46.505975 containerd[1626]: time="2025-11-01T01:58:46.505924337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.706135247s" Nov 1 01:58:46.505975 containerd[1626]: time="2025-11-01T01:58:46.505976943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:58:46.510705 containerd[1626]: time="2025-11-01T01:58:46.509585519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:58:46.511310 containerd[1626]: time="2025-11-01T01:58:46.511129960Z" level=info msg="CreateContainer within sandbox 
\"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 1 01:58:46.523399 containerd[1626]: time="2025-11-01T01:58:46.522614215Z" level=info msg="CreateContainer within sandbox \"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78\""
Nov 1 01:58:46.525624 containerd[1626]: time="2025-11-01T01:58:46.524881051Z" level=info msg="StartContainer for \"be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78\""
Nov 1 01:58:46.575749 systemd[1]: run-containerd-runc-k8s.io-be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78-runc.qv9QIv.mount: Deactivated successfully.
Nov 1 01:58:46.618244 containerd[1626]: time="2025-11-01T01:58:46.617424495Z" level=info msg="StartContainer for \"be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78\" returns successfully"
Nov 1 01:58:46.676393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78-rootfs.mount: Deactivated successfully.
Nov 1 01:58:46.706833 containerd[1626]: time="2025-11-01T01:58:46.680652139Z" level=info msg="shim disconnected" id=be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78 namespace=k8s.io
Nov 1 01:58:46.706833 containerd[1626]: time="2025-11-01T01:58:46.706624801Z" level=warning msg="cleaning up after shim disconnected" id=be0225a20fe7f7c0e92870b8cc9c51d6043355f427ab0d7cf2631e5cc041ba78 namespace=k8s.io
Nov 1 01:58:46.706833 containerd[1626]: time="2025-11-01T01:58:46.706651475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 01:58:46.735810 kubelet[2853]: E1101 01:58:46.735742 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2"
Nov 1 01:58:48.735634 kubelet[2853]: E1101 01:58:48.735568 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2"
Nov 1 01:58:49.047123 containerd[1626]: time="2025-11-01T01:58:49.046748294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:58:49.050576 containerd[1626]: time="2025-11-01T01:58:49.050474497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Nov 1 01:58:49.050743 containerd[1626]: time="2025-11-01T01:58:49.050647548Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:58:49.056457 containerd[1626]: time="2025-11-01T01:58:49.056185485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:58:49.071856 containerd[1626]: time="2025-11-01T01:58:49.071717795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.56209321s"
Nov 1 01:58:49.071856 containerd[1626]: time="2025-11-01T01:58:49.071757256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 1 01:58:49.073488 containerd[1626]: time="2025-11-01T01:58:49.073303744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 1 01:58:49.096749 containerd[1626]: time="2025-11-01T01:58:49.096707912Z" level=info msg="CreateContainer within sandbox \"76e009e30449675beea9fc8ef0a75b8c0d1920e3ee4a1fbff54462d821dc8a3f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 1 01:58:49.112081 containerd[1626]: time="2025-11-01T01:58:49.111704043Z" level=info msg="CreateContainer within sandbox \"76e009e30449675beea9fc8ef0a75b8c0d1920e3ee4a1fbff54462d821dc8a3f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c4d327d70fc0b55c3acf6a3919bb6abbc970e0104fa0ee940a8b9936c22f7fe\""
Nov 1 01:58:49.112499 containerd[1626]: time="2025-11-01T01:58:49.112387637Z" level=info msg="StartContainer for \"4c4d327d70fc0b55c3acf6a3919bb6abbc970e0104fa0ee940a8b9936c22f7fe\""
Nov 1 01:58:49.225589 containerd[1626]: time="2025-11-01T01:58:49.225528330Z" level=info msg="StartContainer for \"4c4d327d70fc0b55c3acf6a3919bb6abbc970e0104fa0ee940a8b9936c22f7fe\" returns successfully"
Nov 1 01:58:50.027239 kubelet[2853]: I1101 01:58:50.027172 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8c6fc94d4-bdlj5" podStartSLOduration=1.7561527300000002 podStartE2EDuration="6.027056431s" podCreationTimestamp="2025-11-01 01:58:44 +0000 UTC" firstStartedPulling="2025-11-01 01:58:44.801938584 +0000 UTC m=+23.206326914" lastFinishedPulling="2025-11-01 01:58:49.072842268 +0000 UTC m=+27.477230615" observedRunningTime="2025-11-01 01:58:50.026243304 +0000 UTC m=+28.430631660" watchObservedRunningTime="2025-11-01 01:58:50.027056431 +0000 UTC m=+28.431444783"
Nov 1 01:58:50.736383 kubelet[2853]: E1101 01:58:50.736282 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2"
Nov 1 01:58:51.009740 kubelet[2853]: I1101 01:58:51.009630 2853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 01:58:52.735754 kubelet[2853]: E1101 01:58:52.735695 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2"
Nov 1 01:58:53.050543 containerd[1626]: time="2025-11-01T01:58:53.050196839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:58:53.051930 containerd[1626]: time="2025-11-01T01:58:53.051216650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 1 01:58:53.051930 containerd[1626]: time="2025-11-01T01:58:53.051882286Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:58:53.054860 containerd[1626]: time="2025-11-01T01:58:53.053979912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:58:53.054860 containerd[1626]: time="2025-11-01T01:58:53.054729544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.981396091s"
Nov 1 01:58:53.054860 containerd[1626]: time="2025-11-01T01:58:53.054761076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 1 01:58:53.057975 containerd[1626]: time="2025-11-01T01:58:53.057943812Z" level=info msg="CreateContainer within sandbox \"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 1 01:58:53.086204 containerd[1626]: time="2025-11-01T01:58:53.086158971Z" level=info msg="CreateContainer within sandbox \"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"53529f8e7b6b777f4596ae0825007c3153c06c557c2e687727c64da9c78095bc\""
Nov 1 01:58:53.089406 containerd[1626]: time="2025-11-01T01:58:53.086952714Z" level=info msg="StartContainer for \"53529f8e7b6b777f4596ae0825007c3153c06c557c2e687727c64da9c78095bc\""
Nov 1 01:58:53.159125 containerd[1626]: time="2025-11-01T01:58:53.159080435Z" level=info msg="StartContainer for \"53529f8e7b6b777f4596ae0825007c3153c06c557c2e687727c64da9c78095bc\" returns successfully"
Nov 1 01:58:53.921574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53529f8e7b6b777f4596ae0825007c3153c06c557c2e687727c64da9c78095bc-rootfs.mount: Deactivated successfully.
Nov 1 01:58:53.931036 containerd[1626]: time="2025-11-01T01:58:53.925237136Z" level=info msg="shim disconnected" id=53529f8e7b6b777f4596ae0825007c3153c06c557c2e687727c64da9c78095bc namespace=k8s.io
Nov 1 01:58:53.931221 containerd[1626]: time="2025-11-01T01:58:53.931041336Z" level=warning msg="cleaning up after shim disconnected" id=53529f8e7b6b777f4596ae0825007c3153c06c557c2e687727c64da9c78095bc namespace=k8s.io
Nov 1 01:58:53.931221 containerd[1626]: time="2025-11-01T01:58:53.931062364Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 01:58:53.996560 kubelet[2853]: I1101 01:58:53.994995 2853 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 1 01:58:54.051091 containerd[1626]: time="2025-11-01T01:58:54.048995087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 1 01:58:54.150716 kubelet[2853]: I1101 01:58:54.150600 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2ab813-9398-4622-9019-515028818713-goldmane-ca-bundle\") pod \"goldmane-666569f655-qxr6w\" (UID: \"7d2ab813-9398-4622-9019-515028818713\") " pod="calico-system/goldmane-666569f655-qxr6w"
Nov 1 01:58:54.150716 kubelet[2853]: I1101 01:58:54.150675 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/088192a6-ad05-483b-b9cf-bbb1b8b9bbb7-config-volume\") pod \"coredns-668d6bf9bc-7slft\" (UID: \"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7\") " pod="kube-system/coredns-668d6bf9bc-7slft"
Nov 1 01:58:54.150716 kubelet[2853]: I1101 01:58:54.150704 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdj45\" (UniqueName: \"kubernetes.io/projected/fdce623b-f498-4a86-b9d7-a71f9568f87d-kube-api-access-mdj45\") pod \"calico-kube-controllers-549d498fd-4kbzk\" (UID: \"fdce623b-f498-4a86-b9d7-a71f9568f87d\") " pod="calico-system/calico-kube-controllers-549d498fd-4kbzk"
Nov 1 01:58:54.150716 kubelet[2853]: I1101 01:58:54.150727 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69be427e-9188-4acc-abfe-94d74b48ccf9-config-volume\") pod \"coredns-668d6bf9bc-kmvw8\" (UID: \"69be427e-9188-4acc-abfe-94d74b48ccf9\") " pod="kube-system/coredns-668d6bf9bc-kmvw8"
Nov 1 01:58:54.151366 kubelet[2853]: I1101 01:58:54.150749 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-backend-key-pair\") pod \"whisker-7bf9fcd996-mjzn9\" (UID: \"e7a4a03d-2310-48ea-8592-8f73cd72194a\") " pod="calico-system/whisker-7bf9fcd996-mjzn9"
Nov 1 01:58:54.151366 kubelet[2853]: I1101 01:58:54.150772 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xmr\" (UniqueName: \"kubernetes.io/projected/53035908-eec7-4eef-b118-526472e0fe2d-kube-api-access-d8xmr\") pod \"calico-apiserver-7589849df-8r8qj\" (UID: \"53035908-eec7-4eef-b118-526472e0fe2d\") " pod="calico-apiserver/calico-apiserver-7589849df-8r8qj"
Nov 1 01:58:54.151366 kubelet[2853]: I1101 01:58:54.150798 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2ab813-9398-4622-9019-515028818713-config\") pod \"goldmane-666569f655-qxr6w\" (UID: \"7d2ab813-9398-4622-9019-515028818713\") " pod="calico-system/goldmane-666569f655-qxr6w"
Nov 1 01:58:54.151366 kubelet[2853]: I1101 01:58:54.150815 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr874\" (UniqueName: \"kubernetes.io/projected/ff93fa77-947d-41bd-9b0a-6912cba460eb-kube-api-access-qr874\") pod \"calico-apiserver-7589849df-tnvl5\" (UID: \"ff93fa77-947d-41bd-9b0a-6912cba460eb\") " pod="calico-apiserver/calico-apiserver-7589849df-tnvl5"
Nov 1 01:58:54.151366 kubelet[2853]: I1101 01:58:54.150838 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-ca-bundle\") pod \"whisker-7bf9fcd996-mjzn9\" (UID: \"e7a4a03d-2310-48ea-8592-8f73cd72194a\") " pod="calico-system/whisker-7bf9fcd996-mjzn9"
Nov 1 01:58:54.151865 kubelet[2853]: I1101 01:58:54.150856 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtfj\" (UniqueName: \"kubernetes.io/projected/69be427e-9188-4acc-abfe-94d74b48ccf9-kube-api-access-4vtfj\") pod \"coredns-668d6bf9bc-kmvw8\" (UID: \"69be427e-9188-4acc-abfe-94d74b48ccf9\") " pod="kube-system/coredns-668d6bf9bc-kmvw8"
Nov 1 01:58:54.151865 kubelet[2853]: I1101 01:58:54.150876 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff93fa77-947d-41bd-9b0a-6912cba460eb-calico-apiserver-certs\") pod \"calico-apiserver-7589849df-tnvl5\" (UID: \"ff93fa77-947d-41bd-9b0a-6912cba460eb\") " pod="calico-apiserver/calico-apiserver-7589849df-tnvl5"
Nov 1 01:58:54.151865 kubelet[2853]: I1101 01:58:54.150895 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdce623b-f498-4a86-b9d7-a71f9568f87d-tigera-ca-bundle\") pod \"calico-kube-controllers-549d498fd-4kbzk\" (UID: \"fdce623b-f498-4a86-b9d7-a71f9568f87d\") " pod="calico-system/calico-kube-controllers-549d498fd-4kbzk"
Nov 1 01:58:54.151865 kubelet[2853]: I1101 01:58:54.150914 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27q6d\" (UniqueName: \"kubernetes.io/projected/7d2ab813-9398-4622-9019-515028818713-kube-api-access-27q6d\") pod \"goldmane-666569f655-qxr6w\" (UID: \"7d2ab813-9398-4622-9019-515028818713\") " pod="calico-system/goldmane-666569f655-qxr6w"
Nov 1 01:58:54.151865 kubelet[2853]: I1101 01:58:54.150960 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97j7d\" (UniqueName: \"kubernetes.io/projected/088192a6-ad05-483b-b9cf-bbb1b8b9bbb7-kube-api-access-97j7d\") pod \"coredns-668d6bf9bc-7slft\" (UID: \"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7\") " pod="kube-system/coredns-668d6bf9bc-7slft"
Nov 1 01:58:54.152021 kubelet[2853]: I1101 01:58:54.150984 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scchh\" (UniqueName: \"kubernetes.io/projected/e7a4a03d-2310-48ea-8592-8f73cd72194a-kube-api-access-scchh\") pod \"whisker-7bf9fcd996-mjzn9\" (UID: \"e7a4a03d-2310-48ea-8592-8f73cd72194a\") " pod="calico-system/whisker-7bf9fcd996-mjzn9"
Nov 1 01:58:54.152021 kubelet[2853]: I1101 01:58:54.151005 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53035908-eec7-4eef-b118-526472e0fe2d-calico-apiserver-certs\") pod \"calico-apiserver-7589849df-8r8qj\" (UID: \"53035908-eec7-4eef-b118-526472e0fe2d\") " pod="calico-apiserver/calico-apiserver-7589849df-8r8qj"
Nov 1 01:58:54.152021 kubelet[2853]: I1101 01:58:54.151028 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7d2ab813-9398-4622-9019-515028818713-goldmane-key-pair\") pod \"goldmane-666569f655-qxr6w\" (UID: \"7d2ab813-9398-4622-9019-515028818713\") " pod="calico-system/goldmane-666569f655-qxr6w"
Nov 1 01:58:54.394041 containerd[1626]: time="2025-11-01T01:58:54.393305661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qxr6w,Uid:7d2ab813-9398-4622-9019-515028818713,Namespace:calico-system,Attempt:0,}"
Nov 1 01:58:54.396669 containerd[1626]: time="2025-11-01T01:58:54.395742135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmvw8,Uid:69be427e-9188-4acc-abfe-94d74b48ccf9,Namespace:kube-system,Attempt:0,}"
Nov 1 01:58:54.396669 containerd[1626]: time="2025-11-01T01:58:54.396277230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-8r8qj,Uid:53035908-eec7-4eef-b118-526472e0fe2d,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 01:58:54.396961 containerd[1626]: time="2025-11-01T01:58:54.396923695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf9fcd996-mjzn9,Uid:e7a4a03d-2310-48ea-8592-8f73cd72194a,Namespace:calico-system,Attempt:0,}"
Nov 1 01:58:54.397461 containerd[1626]: time="2025-11-01T01:58:54.397427914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7slft,Uid:088192a6-ad05-483b-b9cf-bbb1b8b9bbb7,Namespace:kube-system,Attempt:0,}"
Nov 1 01:58:54.398374 containerd[1626]: time="2025-11-01T01:58:54.398336933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d498fd-4kbzk,Uid:fdce623b-f498-4a86-b9d7-a71f9568f87d,Namespace:calico-system,Attempt:0,}"
Nov 1 01:58:54.406394 containerd[1626]: time="2025-11-01T01:58:54.406288752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-tnvl5,Uid:ff93fa77-947d-41bd-9b0a-6912cba460eb,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 01:58:54.746173 containerd[1626]: time="2025-11-01T01:58:54.745841518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5qvt,Uid:4caf741f-c22d-4e76-9e9d-18f81ca6bba2,Namespace:calico-system,Attempt:0,}"
Nov 1 01:58:54.750449 containerd[1626]: time="2025-11-01T01:58:54.750164519Z" level=error msg="Failed to destroy network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.773780 containerd[1626]: time="2025-11-01T01:58:54.773424409Z" level=error msg="encountered an error cleaning up failed sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.780674 containerd[1626]: time="2025-11-01T01:58:54.780606897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf9fcd996-mjzn9,Uid:e7a4a03d-2310-48ea-8592-8f73cd72194a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.789950 kubelet[2853]: E1101 01:58:54.788605 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.789950 kubelet[2853]: E1101 01:58:54.788739 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bf9fcd996-mjzn9"
Nov 1 01:58:54.789950 kubelet[2853]: E1101 01:58:54.788772 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bf9fcd996-mjzn9"
Nov 1 01:58:54.790278 kubelet[2853]: E1101 01:58:54.788838 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bf9fcd996-mjzn9_calico-system(e7a4a03d-2310-48ea-8592-8f73cd72194a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bf9fcd996-mjzn9_calico-system(e7a4a03d-2310-48ea-8592-8f73cd72194a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bf9fcd996-mjzn9" podUID="e7a4a03d-2310-48ea-8592-8f73cd72194a"
Nov 1 01:58:54.836057 containerd[1626]: time="2025-11-01T01:58:54.835999065Z" level=error msg="Failed to destroy network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.836956 containerd[1626]: time="2025-11-01T01:58:54.836928882Z" level=error msg="encountered an error cleaning up failed sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.837021 containerd[1626]: time="2025-11-01T01:58:54.836986864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-tnvl5,Uid:ff93fa77-947d-41bd-9b0a-6912cba460eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.837448 kubelet[2853]: E1101 01:58:54.837272 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.837448 kubelet[2853]: E1101 01:58:54.837354 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5"
Nov 1 01:58:54.837448 kubelet[2853]: E1101 01:58:54.837379 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5"
Nov 1 01:58:54.837934 kubelet[2853]: E1101 01:58:54.837658 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7589849df-tnvl5_calico-apiserver(ff93fa77-947d-41bd-9b0a-6912cba460eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7589849df-tnvl5_calico-apiserver(ff93fa77-947d-41bd-9b0a-6912cba460eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb"
Nov 1 01:58:54.843586 containerd[1626]: time="2025-11-01T01:58:54.843382041Z" level=error msg="Failed to destroy network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.844059 containerd[1626]: time="2025-11-01T01:58:54.844027213Z" level=error msg="encountered an error cleaning up failed sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.844363 containerd[1626]: time="2025-11-01T01:58:54.844334221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7slft,Uid:088192a6-ad05-483b-b9cf-bbb1b8b9bbb7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.845505 kubelet[2853]: E1101 01:58:54.845469 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.846713 kubelet[2853]: E1101 01:58:54.845826 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7slft"
Nov 1 01:58:54.846713 kubelet[2853]: E1101 01:58:54.845857 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7slft"
Nov 1 01:58:54.846713 kubelet[2853]: E1101 01:58:54.845923 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7slft_kube-system(088192a6-ad05-483b-b9cf-bbb1b8b9bbb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7slft_kube-system(088192a6-ad05-483b-b9cf-bbb1b8b9bbb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7slft" podUID="088192a6-ad05-483b-b9cf-bbb1b8b9bbb7"
Nov 1 01:58:54.861161 containerd[1626]: time="2025-11-01T01:58:54.860981743Z" level=error msg="Failed to destroy network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.861413 containerd[1626]: time="2025-11-01T01:58:54.861344573Z" level=error msg="encountered an error cleaning up failed sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.861574 containerd[1626]: time="2025-11-01T01:58:54.861546061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-8r8qj,Uid:53035908-eec7-4eef-b118-526472e0fe2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.861983 kubelet[2853]: E1101 01:58:54.861844 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.861983 kubelet[2853]: E1101 01:58:54.861911 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj"
Nov 1 01:58:54.861983 kubelet[2853]: E1101 01:58:54.861934 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj"
Nov 1 01:58:54.862231 kubelet[2853]: E1101 01:58:54.861982 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7589849df-8r8qj_calico-apiserver(53035908-eec7-4eef-b118-526472e0fe2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7589849df-8r8qj_calico-apiserver(53035908-eec7-4eef-b118-526472e0fe2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d"
Nov 1 01:58:54.866701 containerd[1626]: time="2025-11-01T01:58:54.866667698Z" level=error msg="Failed to destroy network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.867328 containerd[1626]: time="2025-11-01T01:58:54.867295424Z" level=error msg="encountered an error cleaning up failed sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.867490 containerd[1626]: time="2025-11-01T01:58:54.867449389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qxr6w,Uid:7d2ab813-9398-4622-9019-515028818713,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.867779 kubelet[2853]: E1101 01:58:54.867741 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.869012 kubelet[2853]: E1101 01:58:54.867798 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qxr6w"
Nov 1 01:58:54.869012 kubelet[2853]: E1101 01:58:54.867823 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qxr6w"
Nov 1 01:58:54.869012 kubelet[2853]: E1101 01:58:54.867870 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qxr6w_calico-system(7d2ab813-9398-4622-9019-515028818713)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qxr6w_calico-system(7d2ab813-9398-4622-9019-515028818713)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713"
Nov 1 01:58:54.874154 containerd[1626]: time="2025-11-01T01:58:54.873481151Z" level=error msg="Failed to destroy network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.874154 containerd[1626]: time="2025-11-01T01:58:54.874017837Z" level=error msg="encountered an error cleaning up failed sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.874267 containerd[1626]: time="2025-11-01T01:58:54.874160225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmvw8,Uid:69be427e-9188-4acc-abfe-94d74b48ccf9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.875322 kubelet[2853]: E1101 01:58:54.874471 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 01:58:54.875322 kubelet[2853]: E1101 01:58:54.874520 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kmvw8"
Nov 1 01:58:54.875322 kubelet[2853]: E1101 01:58:54.874543 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kmvw8"
Nov 1 01:58:54.875458 kubelet[2853]: E1101 01:58:54.874590 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kmvw8_kube-system(69be427e-9188-4acc-abfe-94d74b48ccf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kmvw8_kube-system(69be427e-9188-4acc-abfe-94d74b48ccf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\\\":
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kmvw8" podUID="69be427e-9188-4acc-abfe-94d74b48ccf9" Nov 1 01:58:54.877749 containerd[1626]: time="2025-11-01T01:58:54.877719663Z" level=error msg="Failed to destroy network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.878836 containerd[1626]: time="2025-11-01T01:58:54.878794113Z" level=error msg="encountered an error cleaning up failed sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.878977 containerd[1626]: time="2025-11-01T01:58:54.878945015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d498fd-4kbzk,Uid:fdce623b-f498-4a86-b9d7-a71f9568f87d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.879284 kubelet[2853]: E1101 01:58:54.879256 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.879824 kubelet[2853]: E1101 01:58:54.879379 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" Nov 1 01:58:54.879824 kubelet[2853]: E1101 01:58:54.879409 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" Nov 1 01:58:54.879824 kubelet[2853]: E1101 01:58:54.879483 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-549d498fd-4kbzk_calico-system(fdce623b-f498-4a86-b9d7-a71f9568f87d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-549d498fd-4kbzk_calico-system(fdce623b-f498-4a86-b9d7-a71f9568f87d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:58:54.946932 
containerd[1626]: time="2025-11-01T01:58:54.946794153Z" level=error msg="Failed to destroy network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.947523 containerd[1626]: time="2025-11-01T01:58:54.947352287Z" level=error msg="encountered an error cleaning up failed sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.947523 containerd[1626]: time="2025-11-01T01:58:54.947408278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5qvt,Uid:4caf741f-c22d-4e76-9e9d-18f81ca6bba2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.948073 kubelet[2853]: E1101 01:58:54.947977 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:54.948157 kubelet[2853]: E1101 01:58:54.948113 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:54.948212 kubelet[2853]: E1101 01:58:54.948189 2853 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b5qvt" Nov 1 01:58:54.948355 kubelet[2853]: E1101 01:58:54.948303 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:58:55.044116 kubelet[2853]: I1101 01:58:55.043901 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:58:55.049205 kubelet[2853]: I1101 01:58:55.049040 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" 
Nov 1 01:58:55.060356 kubelet[2853]: I1101 01:58:55.059667 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:58:55.067619 kubelet[2853]: I1101 01:58:55.066805 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:58:55.083832 kubelet[2853]: I1101 01:58:55.082876 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:58:55.088695 kubelet[2853]: I1101 01:58:55.088675 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:58:55.100606 containerd[1626]: time="2025-11-01T01:58:55.099682853Z" level=info msg="StopPodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\"" Nov 1 01:58:55.101048 containerd[1626]: time="2025-11-01T01:58:55.101024550Z" level=info msg="StopPodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\"" Nov 1 01:58:55.101637 containerd[1626]: time="2025-11-01T01:58:55.101618122Z" level=info msg="Ensure that sandbox d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7 in task-service has been cleanup successfully" Nov 1 01:58:55.101784 containerd[1626]: time="2025-11-01T01:58:55.101743777Z" level=info msg="StopPodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\"" Nov 1 01:58:55.102215 containerd[1626]: time="2025-11-01T01:58:55.101938428Z" level=info msg="Ensure that sandbox b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d in task-service has been cleanup successfully" Nov 1 01:58:55.102644 containerd[1626]: time="2025-11-01T01:58:55.102603930Z" level=info msg="StopPodSandbox for 
\"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\"" Nov 1 01:58:55.102825 containerd[1626]: time="2025-11-01T01:58:55.102805969Z" level=info msg="Ensure that sandbox fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1 in task-service has been cleanup successfully" Nov 1 01:58:55.102921 containerd[1626]: time="2025-11-01T01:58:55.102904917Z" level=info msg="StopPodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\"" Nov 1 01:58:55.103077 containerd[1626]: time="2025-11-01T01:58:55.103059440Z" level=info msg="Ensure that sandbox 6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d in task-service has been cleanup successfully" Nov 1 01:58:55.103861 containerd[1626]: time="2025-11-01T01:58:55.101693286Z" level=info msg="Ensure that sandbox 990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918 in task-service has been cleanup successfully" Nov 1 01:58:55.105293 containerd[1626]: time="2025-11-01T01:58:55.104833145Z" level=info msg="StopPodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\"" Nov 1 01:58:55.105623 containerd[1626]: time="2025-11-01T01:58:55.105600906Z" level=info msg="Ensure that sandbox b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35 in task-service has been cleanup successfully" Nov 1 01:58:55.106022 kubelet[2853]: I1101 01:58:55.105576 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:58:55.106325 containerd[1626]: time="2025-11-01T01:58:55.106288384Z" level=info msg="StopPodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\"" Nov 1 01:58:55.106623 containerd[1626]: time="2025-11-01T01:58:55.106570891Z" level=info msg="Ensure that sandbox 59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff in task-service has been cleanup successfully" Nov 1 01:58:55.117198 kubelet[2853]: 
I1101 01:58:55.117127 2853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:58:55.118474 containerd[1626]: time="2025-11-01T01:58:55.118277278Z" level=info msg="StopPodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\"" Nov 1 01:58:55.120948 containerd[1626]: time="2025-11-01T01:58:55.120687698Z" level=info msg="Ensure that sandbox 6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4 in task-service has been cleanup successfully" Nov 1 01:58:55.244522 containerd[1626]: time="2025-11-01T01:58:55.244362730Z" level=error msg="StopPodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" failed" error="failed to destroy network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.244856 kubelet[2853]: E1101 01:58:55.244706 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:58:55.255275 containerd[1626]: time="2025-11-01T01:58:55.255124622Z" level=error msg="StopPodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" failed" error="failed to destroy network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.256123 kubelet[2853]: E1101 01:58:55.255903 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:58:55.269267 kubelet[2853]: E1101 01:58:55.255950 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d"} Nov 1 01:58:55.269371 kubelet[2853]: E1101 01:58:55.269296 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7a4a03d-2310-48ea-8592-8f73cd72194a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.269371 kubelet[2853]: E1101 01:58:55.269331 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7a4a03d-2310-48ea-8592-8f73cd72194a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-7bf9fcd996-mjzn9" podUID="e7a4a03d-2310-48ea-8592-8f73cd72194a" Nov 1 01:58:55.269371 kubelet[2853]: E1101 01:58:55.253071 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7"} Nov 1 01:58:55.269754 kubelet[2853]: E1101 01:58:55.269408 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdce623b-f498-4a86-b9d7-a71f9568f87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.269754 kubelet[2853]: E1101 01:58:55.269428 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdce623b-f498-4a86-b9d7-a71f9568f87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:58:55.269869 containerd[1626]: time="2025-11-01T01:58:55.269592497Z" level=error msg="StopPodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" failed" error="failed to destroy network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 1 01:58:55.272208 kubelet[2853]: E1101 01:58:55.269890 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:58:55.272208 kubelet[2853]: E1101 01:58:55.269931 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918"} Nov 1 01:58:55.272208 kubelet[2853]: E1101 01:58:55.269968 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53035908-eec7-4eef-b118-526472e0fe2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.272208 kubelet[2853]: E1101 01:58:55.269990 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53035908-eec7-4eef-b118-526472e0fe2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 
01:58:55.277008 containerd[1626]: time="2025-11-01T01:58:55.276958442Z" level=error msg="StopPodSandbox for \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" failed" error="failed to destroy network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.277962 kubelet[2853]: E1101 01:58:55.277210 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:58:55.278037 kubelet[2853]: E1101 01:58:55.277979 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1"} Nov 1 01:58:55.278037 kubelet[2853]: E1101 01:58:55.278019 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff93fa77-947d-41bd-9b0a-6912cba460eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.278899 kubelet[2853]: E1101 01:58:55.278044 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff93fa77-947d-41bd-9b0a-6912cba460eb\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:58:55.281353 containerd[1626]: time="2025-11-01T01:58:55.281182421Z" level=error msg="StopPodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" failed" error="failed to destroy network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.282356 kubelet[2853]: E1101 01:58:55.282296 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:58:55.282356 kubelet[2853]: E1101 01:58:55.282334 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff"} Nov 1 01:58:55.282480 kubelet[2853]: E1101 01:58:55.282376 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69be427e-9188-4acc-abfe-94d74b48ccf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.282480 kubelet[2853]: E1101 01:58:55.282416 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69be427e-9188-4acc-abfe-94d74b48ccf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kmvw8" podUID="69be427e-9188-4acc-abfe-94d74b48ccf9" Nov 1 01:58:55.284364 containerd[1626]: time="2025-11-01T01:58:55.284285578Z" level=error msg="StopPodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" failed" error="failed to destroy network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.284490 kubelet[2853]: E1101 01:58:55.284442 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:58:55.284490 kubelet[2853]: E1101 01:58:55.284485 2853 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35"} Nov 1 01:58:55.284577 kubelet[2853]: E1101 01:58:55.284524 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.284577 kubelet[2853]: E1101 01:58:55.284557 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4caf741f-c22d-4e76-9e9d-18f81ca6bba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:58:55.286995 containerd[1626]: time="2025-11-01T01:58:55.286939586Z" level=error msg="StopPodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" failed" error="failed to destroy network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.287133 kubelet[2853]: E1101 01:58:55.287101 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:58:55.287229 kubelet[2853]: E1101 01:58:55.287185 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4"} Nov 1 01:58:55.287267 kubelet[2853]: E1101 01:58:55.287227 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d2ab813-9398-4622-9019-515028818713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.287325 kubelet[2853]: E1101 01:58:55.287254 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d2ab813-9398-4622-9019-515028818713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 01:58:55.288367 containerd[1626]: time="2025-11-01T01:58:55.288307280Z" level=error msg="StopPodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" 
failed" error="failed to destroy network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:58:55.288518 kubelet[2853]: E1101 01:58:55.288487 2853 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:58:55.288551 kubelet[2853]: E1101 01:58:55.288518 2853 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d"} Nov 1 01:58:55.288580 kubelet[2853]: E1101 01:58:55.288554 2853 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:58:55.288628 kubelet[2853]: E1101 01:58:55.288574 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7slft" podUID="088192a6-ad05-483b-b9cf-bbb1b8b9bbb7" Nov 1 01:59:01.931703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252240213.mount: Deactivated successfully. Nov 1 01:59:01.979273 containerd[1626]: time="2025-11-01T01:59:01.978973621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 01:59:01.987108 containerd[1626]: time="2025-11-01T01:59:01.986189193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.932129459s" Nov 1 01:59:01.987108 containerd[1626]: time="2025-11-01T01:59:01.986231150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:59:02.024260 containerd[1626]: time="2025-11-01T01:59:02.022675506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:59:02.093313 containerd[1626]: time="2025-11-01T01:59:02.091443331Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:59:02.101382 containerd[1626]: time="2025-11-01T01:59:02.101336199Z" level=info msg="CreateContainer within sandbox \"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" 
Nov 1 01:59:02.104183 containerd[1626]: time="2025-11-01T01:59:02.103373195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:59:02.177899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292557328.mount: Deactivated successfully. Nov 1 01:59:02.192725 containerd[1626]: time="2025-11-01T01:59:02.192512344Z" level=info msg="CreateContainer within sandbox \"d98288c6f12baf3d46631b0b0a254b2bd34692451a68856fc03f7c6a42939fb6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"823c94cc33e0765c881888e9aca563be56d4f5bfe2eae679594ab5f5fd74f123\"" Nov 1 01:59:02.195007 containerd[1626]: time="2025-11-01T01:59:02.194965944Z" level=info msg="StartContainer for \"823c94cc33e0765c881888e9aca563be56d4f5bfe2eae679594ab5f5fd74f123\"" Nov 1 01:59:02.332532 containerd[1626]: time="2025-11-01T01:59:02.331424387Z" level=info msg="StartContainer for \"823c94cc33e0765c881888e9aca563be56d4f5bfe2eae679594ab5f5fd74f123\" returns successfully" Nov 1 01:59:02.465705 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:59:02.472282 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 01:59:02.689411 containerd[1626]: time="2025-11-01T01:59:02.686857617Z" level=info msg="StopPodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\"" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.829 [INFO][4020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.833 [INFO][4020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" iface="eth0" netns="/var/run/netns/cni-4814f65f-5c14-4abb-3ade-2ad0d26f8774" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.834 [INFO][4020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" iface="eth0" netns="/var/run/netns/cni-4814f65f-5c14-4abb-3ade-2ad0d26f8774" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.835 [INFO][4020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" iface="eth0" netns="/var/run/netns/cni-4814f65f-5c14-4abb-3ade-2ad0d26f8774" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.835 [INFO][4020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.835 [INFO][4020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.971 [INFO][4028] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.974 [INFO][4028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.974 [INFO][4028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.995 [WARNING][4028] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.995 [INFO][4028] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:02.998 [INFO][4028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:03.022993 containerd[1626]: 2025-11-01 01:59:03.010 [INFO][4020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:03.022993 containerd[1626]: time="2025-11-01T01:59:03.022637414Z" level=info msg="TearDown network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" successfully" Nov 1 01:59:03.022993 containerd[1626]: time="2025-11-01T01:59:03.022675584Z" level=info msg="StopPodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" returns successfully" Nov 1 01:59:03.030756 systemd[1]: run-netns-cni\x2d4814f65f\x2d5c14\x2d4abb\x2d3ade\x2d2ad0d26f8774.mount: Deactivated successfully. 
Nov 1 01:59:03.147776 kubelet[2853]: I1101 01:59:03.147663 2853 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-backend-key-pair\") pod \"e7a4a03d-2310-48ea-8592-8f73cd72194a\" (UID: \"e7a4a03d-2310-48ea-8592-8f73cd72194a\") " Nov 1 01:59:03.149033 kubelet[2853]: I1101 01:59:03.147839 2853 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scchh\" (UniqueName: \"kubernetes.io/projected/e7a4a03d-2310-48ea-8592-8f73cd72194a-kube-api-access-scchh\") pod \"e7a4a03d-2310-48ea-8592-8f73cd72194a\" (UID: \"e7a4a03d-2310-48ea-8592-8f73cd72194a\") " Nov 1 01:59:03.149033 kubelet[2853]: I1101 01:59:03.147925 2853 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-ca-bundle\") pod \"e7a4a03d-2310-48ea-8592-8f73cd72194a\" (UID: \"e7a4a03d-2310-48ea-8592-8f73cd72194a\") " Nov 1 01:59:03.173041 systemd[1]: var-lib-kubelet-pods-e7a4a03d\x2d2310\x2d48ea\x2d8592\x2d8f73cd72194a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 01:59:03.178052 systemd[1]: var-lib-kubelet-pods-e7a4a03d\x2d2310\x2d48ea\x2d8592\x2d8f73cd72194a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dscchh.mount: Deactivated successfully. Nov 1 01:59:03.184022 kubelet[2853]: I1101 01:59:03.182947 2853 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e7a4a03d-2310-48ea-8592-8f73cd72194a" (UID: "e7a4a03d-2310-48ea-8592-8f73cd72194a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:59:03.184623 kubelet[2853]: I1101 01:59:03.183010 2853 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a4a03d-2310-48ea-8592-8f73cd72194a-kube-api-access-scchh" (OuterVolumeSpecName: "kube-api-access-scchh") pod "e7a4a03d-2310-48ea-8592-8f73cd72194a" (UID: "e7a4a03d-2310-48ea-8592-8f73cd72194a"). InnerVolumeSpecName "kube-api-access-scchh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:59:03.184794 kubelet[2853]: I1101 01:59:03.184774 2853 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e7a4a03d-2310-48ea-8592-8f73cd72194a" (UID: "e7a4a03d-2310-48ea-8592-8f73cd72194a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:59:03.250409 kubelet[2853]: I1101 01:59:03.250354 2853 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-scchh\" (UniqueName: \"kubernetes.io/projected/e7a4a03d-2310-48ea-8592-8f73cd72194a-kube-api-access-scchh\") on node \"srv-gnbw4.gb1.brightbox.com\" DevicePath \"\"" Nov 1 01:59:03.250409 kubelet[2853]: I1101 01:59:03.250397 2853 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-ca-bundle\") on node \"srv-gnbw4.gb1.brightbox.com\" DevicePath \"\"" Nov 1 01:59:03.250409 kubelet[2853]: I1101 01:59:03.250410 2853 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7a4a03d-2310-48ea-8592-8f73cd72194a-whisker-backend-key-pair\") on node \"srv-gnbw4.gb1.brightbox.com\" DevicePath \"\"" Nov 1 01:59:03.268331 kubelet[2853]: I1101 01:59:03.244057 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-node-qfn6x" podStartSLOduration=2.051523721 podStartE2EDuration="19.241046804s" podCreationTimestamp="2025-11-01 01:58:44 +0000 UTC" firstStartedPulling="2025-11-01 01:58:44.799183654 +0000 UTC m=+23.203571981" lastFinishedPulling="2025-11-01 01:59:01.988706733 +0000 UTC m=+40.393095064" observedRunningTime="2025-11-01 01:59:03.235307478 +0000 UTC m=+41.639695835" watchObservedRunningTime="2025-11-01 01:59:03.241046804 +0000 UTC m=+41.645435214" Nov 1 01:59:03.365086 systemd-journald[1173]: Under memory pressure, flushing caches. Nov 1 01:59:03.331649 systemd-resolved[1509]: Under memory pressure, flushing caches. Nov 1 01:59:03.332081 systemd-resolved[1509]: Flushed all caches. Nov 1 01:59:03.552611 kubelet[2853]: I1101 01:59:03.552441 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62809712-0e36-4839-9d03-798eca9b1c78-whisker-ca-bundle\") pod \"whisker-5fcc756c94-8k58z\" (UID: \"62809712-0e36-4839-9d03-798eca9b1c78\") " pod="calico-system/whisker-5fcc756c94-8k58z" Nov 1 01:59:03.552611 kubelet[2853]: I1101 01:59:03.552541 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgh4c\" (UniqueName: \"kubernetes.io/projected/62809712-0e36-4839-9d03-798eca9b1c78-kube-api-access-pgh4c\") pod \"whisker-5fcc756c94-8k58z\" (UID: \"62809712-0e36-4839-9d03-798eca9b1c78\") " pod="calico-system/whisker-5fcc756c94-8k58z" Nov 1 01:59:03.552611 kubelet[2853]: I1101 01:59:03.552581 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/62809712-0e36-4839-9d03-798eca9b1c78-whisker-backend-key-pair\") pod \"whisker-5fcc756c94-8k58z\" (UID: \"62809712-0e36-4839-9d03-798eca9b1c78\") " pod="calico-system/whisker-5fcc756c94-8k58z" Nov 1 01:59:03.711496 containerd[1626]: 
time="2025-11-01T01:59:03.711372258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcc756c94-8k58z,Uid:62809712-0e36-4839-9d03-798eca9b1c78,Namespace:calico-system,Attempt:0,}" Nov 1 01:59:03.760015 kubelet[2853]: I1101 01:59:03.759800 2853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a4a03d-2310-48ea-8592-8f73cd72194a" path="/var/lib/kubelet/pods/e7a4a03d-2310-48ea-8592-8f73cd72194a/volumes" Nov 1 01:59:03.899618 systemd-networkd[1267]: caliab99e76554e: Link UP Nov 1 01:59:03.901050 systemd-networkd[1267]: caliab99e76554e: Gained carrier Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.779 [INFO][4074] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.799 [INFO][4074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0 whisker-5fcc756c94- calico-system 62809712-0e36-4839-9d03-798eca9b1c78 873 0 2025-11-01 01:59:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fcc756c94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com whisker-5fcc756c94-8k58z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliab99e76554e [] [] }} ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.799 [INFO][4074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" 
WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.836 [INFO][4085] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" HandleID="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.836 [INFO][4085] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" HandleID="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"whisker-5fcc756c94-8k58z", "timestamp":"2025-11-01 01:59:03.836077892 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.836 [INFO][4085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.836 [INFO][4085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.836 [INFO][4085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.845 [INFO][4085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.856 [INFO][4085] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.862 [INFO][4085] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.865 [INFO][4085] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.867 [INFO][4085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.868 [INFO][4085] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.869 [INFO][4085] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.874 [INFO][4085] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.880 [INFO][4085] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.43.65/26] block=192.168.43.64/26 handle="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.881 [INFO][4085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.65/26] handle="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.881 [INFO][4085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:03.921710 containerd[1626]: 2025-11-01 01:59:03.881 [INFO][4085] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.65/26] IPv6=[] ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" HandleID="k8s-pod-network.b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.922619 containerd[1626]: 2025-11-01 01:59:03.884 [INFO][4074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0", GenerateName:"whisker-5fcc756c94-", Namespace:"calico-system", SelfLink:"", UID:"62809712-0e36-4839-9d03-798eca9b1c78", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 59, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcc756c94", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5fcc756c94-8k58z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliab99e76554e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:03.922619 containerd[1626]: 2025-11-01 01:59:03.885 [INFO][4074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.65/32] ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.922619 containerd[1626]: 2025-11-01 01:59:03.885 [INFO][4074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab99e76554e ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.922619 containerd[1626]: 2025-11-01 01:59:03.902 [INFO][4074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.922619 containerd[1626]: 2025-11-01 01:59:03.903 [INFO][4074] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0", GenerateName:"whisker-5fcc756c94-", Namespace:"calico-system", SelfLink:"", UID:"62809712-0e36-4839-9d03-798eca9b1c78", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 59, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcc756c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca", Pod:"whisker-5fcc756c94-8k58z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliab99e76554e", MAC:"42:d2:43:7b:73:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:03.922619 containerd[1626]: 2025-11-01 01:59:03.917 [INFO][4074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca" Namespace="calico-system" Pod="whisker-5fcc756c94-8k58z" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--5fcc756c94--8k58z-eth0" Nov 1 01:59:03.942951 systemd[1]: run-containerd-runc-k8s.io-823c94cc33e0765c881888e9aca563be56d4f5bfe2eae679594ab5f5fd74f123-runc.p2JkaX.mount: Deactivated successfully. Nov 1 01:59:03.968569 containerd[1626]: time="2025-11-01T01:59:03.968379741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:03.968569 containerd[1626]: time="2025-11-01T01:59:03.968457517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:03.970420 containerd[1626]: time="2025-11-01T01:59:03.968773208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:03.971476 containerd[1626]: time="2025-11-01T01:59:03.971413571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:04.058916 containerd[1626]: time="2025-11-01T01:59:04.057850082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcc756c94-8k58z,Uid:62809712-0e36-4839-9d03-798eca9b1c78,Namespace:calico-system,Attempt:0,} returns sandbox id \"b827e3ec3d46ca4c8bb5db928ae58f43cb669b53d45855fa16cac02a1c108cca\"" Nov 1 01:59:04.063103 containerd[1626]: time="2025-11-01T01:59:04.063066590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:59:04.400084 containerd[1626]: time="2025-11-01T01:59:04.399984637Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:04.412311 containerd[1626]: time="2025-11-01T01:59:04.402907564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:59:04.413348 containerd[1626]: time="2025-11-01T01:59:04.403175055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:59:04.419966 kubelet[2853]: E1101 01:59:04.417007 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:04.419966 kubelet[2853]: E1101 01:59:04.419912 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:04.429947 kubelet[2853]: E1101 01:59:04.429704 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9caf0efd36747098a83dd07c388322c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:04.434239 containerd[1626]: time="2025-11-01T01:59:04.434038478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:59:04.752278 containerd[1626]: time="2025-11-01T01:59:04.751526576Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:04.753576 containerd[1626]: time="2025-11-01T01:59:04.752949252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:59:04.753576 containerd[1626]: time="2025-11-01T01:59:04.753193721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:59:04.753805 kubelet[2853]: E1101 01:59:04.753657 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:04.753805 kubelet[2853]: E1101 01:59:04.753765 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:04.755275 kubelet[2853]: E1101 01:59:04.754044 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:04.755805 kubelet[2853]: E1101 01:59:04.755410 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 01:59:05.210510 kubelet[2853]: E1101 01:59:05.210347 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 01:59:05.391614 systemd-journald[1173]: Under memory pressure, flushing caches. Nov 1 01:59:05.379874 systemd-resolved[1509]: Under memory pressure, flushing caches. Nov 1 01:59:05.379917 systemd-resolved[1509]: Flushed all caches. Nov 1 01:59:05.744883 containerd[1626]: time="2025-11-01T01:59:05.744815536Z" level=info msg="StopPodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\"" Nov 1 01:59:05.770117 systemd-networkd[1267]: caliab99e76554e: Gained IPv6LL Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.835 [INFO][4311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.836 [INFO][4311] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" iface="eth0" netns="/var/run/netns/cni-d7b878a3-302e-8c1a-79e2-8d663e080575" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.838 [INFO][4311] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" iface="eth0" netns="/var/run/netns/cni-d7b878a3-302e-8c1a-79e2-8d663e080575" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.838 [INFO][4311] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" iface="eth0" netns="/var/run/netns/cni-d7b878a3-302e-8c1a-79e2-8d663e080575" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.838 [INFO][4311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.838 [INFO][4311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.866 [INFO][4318] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.867 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.867 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.878 [WARNING][4318] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.878 [INFO][4318] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.881 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:05.894221 containerd[1626]: 2025-11-01 01:59:05.886 [INFO][4311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:05.896614 containerd[1626]: time="2025-11-01T01:59:05.895100505Z" level=info msg="TearDown network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" successfully" Nov 1 01:59:05.896614 containerd[1626]: time="2025-11-01T01:59:05.895207864Z" level=info msg="StopPodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" returns successfully" Nov 1 01:59:05.894999 systemd[1]: run-netns-cni\x2dd7b878a3\x2d302e\x2d8c1a\x2d79e2\x2d8d663e080575.mount: Deactivated successfully. 
Nov 1 01:59:05.899612 containerd[1626]: time="2025-11-01T01:59:05.897828122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qxr6w,Uid:7d2ab813-9398-4622-9019-515028818713,Namespace:calico-system,Attempt:1,}" Nov 1 01:59:06.066298 systemd-networkd[1267]: calif148f172a45: Link UP Nov 1 01:59:06.066464 systemd-networkd[1267]: calif148f172a45: Gained carrier Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.939 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.954 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0 goldmane-666569f655- calico-system 7d2ab813-9398-4622-9019-515028818713 898 0 2025-11-01 01:58:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com goldmane-666569f655-qxr6w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif148f172a45 [] [] }} ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.954 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.996 [INFO][4338] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" HandleID="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.997 [INFO][4338] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" HandleID="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003339b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"goldmane-666569f655-qxr6w", "timestamp":"2025-11-01 01:59:05.996830955 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.997 [INFO][4338] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.997 [INFO][4338] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:05.997 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.005 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.014 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.022 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.027 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.031 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.032 [INFO][4338] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.035 [INFO][4338] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.044 [INFO][4338] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.059 [INFO][4338] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.43.66/26] block=192.168.43.64/26 handle="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.060 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.66/26] handle="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.060 [INFO][4338] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:06.085614 containerd[1626]: 2025-11-01 01:59:06.060 [INFO][4338] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.66/26] IPv6=[] ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" HandleID="k8s-pod-network.8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.087058 containerd[1626]: 2025-11-01 01:59:06.063 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7d2ab813-9398-4622-9019-515028818713", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-qxr6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif148f172a45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:06.087058 containerd[1626]: 2025-11-01 01:59:06.063 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.66/32] ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.087058 containerd[1626]: 2025-11-01 01:59:06.063 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif148f172a45 ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.087058 containerd[1626]: 2025-11-01 01:59:06.065 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.087058 containerd[1626]: 2025-11-01 01:59:06.065 [INFO][4325] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7d2ab813-9398-4622-9019-515028818713", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"8472be175764213a272de39ab7399973304268436b462a59487842144087429b", Pod:"goldmane-666569f655-qxr6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif148f172a45", MAC:"16:f9:7a:e2:49:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:06.087058 containerd[1626]: 2025-11-01 01:59:06.083 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8472be175764213a272de39ab7399973304268436b462a59487842144087429b" Namespace="calico-system" Pod="goldmane-666569f655-qxr6w" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:06.108955 containerd[1626]: time="2025-11-01T01:59:06.108615318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:06.108955 containerd[1626]: time="2025-11-01T01:59:06.108708063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:06.108955 containerd[1626]: time="2025-11-01T01:59:06.108733635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:06.108955 containerd[1626]: time="2025-11-01T01:59:06.108894715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:06.177682 containerd[1626]: time="2025-11-01T01:59:06.177568264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qxr6w,Uid:7d2ab813-9398-4622-9019-515028818713,Namespace:calico-system,Attempt:1,} returns sandbox id \"8472be175764213a272de39ab7399973304268436b462a59487842144087429b\"" Nov 1 01:59:06.181766 containerd[1626]: time="2025-11-01T01:59:06.181625752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:59:06.497966 containerd[1626]: time="2025-11-01T01:59:06.497842982Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:06.499318 containerd[1626]: time="2025-11-01T01:59:06.499089690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:59:06.499318 containerd[1626]: time="2025-11-01T01:59:06.499199512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:06.499746 kubelet[2853]: E1101 01:59:06.499615 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:06.501347 kubelet[2853]: E1101 01:59:06.499756 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:06.501347 kubelet[2853]: E1101 01:59:06.500190 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27q6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qxr6w_calico-system(7d2ab813-9398-4622-9019-515028818713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:06.503236 kubelet[2853]: E1101 01:59:06.501965 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 01:59:06.738510 containerd[1626]: time="2025-11-01T01:59:06.738131021Z" level=info msg="StopPodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\"" Nov 1 01:59:06.738510 containerd[1626]: time="2025-11-01T01:59:06.738190152Z" level=info 
msg="StopPodSandbox for \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\"" Nov 1 01:59:06.741660 containerd[1626]: time="2025-11-01T01:59:06.738149900Z" level=info msg="StopPodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\"" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.834 [INFO][4439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.835 [INFO][4439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" iface="eth0" netns="/var/run/netns/cni-0752881a-da92-6f5b-afec-a446cb32a717" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.835 [INFO][4439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" iface="eth0" netns="/var/run/netns/cni-0752881a-da92-6f5b-afec-a446cb32a717" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.835 [INFO][4439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" iface="eth0" netns="/var/run/netns/cni-0752881a-da92-6f5b-afec-a446cb32a717" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.835 [INFO][4439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.835 [INFO][4439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.918 [INFO][4458] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.918 [INFO][4458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.918 [INFO][4458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.930 [WARNING][4458] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.930 [INFO][4458] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.932 [INFO][4458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:06.938263 containerd[1626]: 2025-11-01 01:59:06.935 [INFO][4439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:06.939068 containerd[1626]: time="2025-11-01T01:59:06.938541380Z" level=info msg="TearDown network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" successfully" Nov 1 01:59:06.939068 containerd[1626]: time="2025-11-01T01:59:06.938768772Z" level=info msg="StopPodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" returns successfully" Nov 1 01:59:06.941890 systemd[1]: run-netns-cni\x2d0752881a\x2dda92\x2d6f5b\x2dafec\x2da446cb32a717.mount: Deactivated successfully. 
Nov 1 01:59:06.945486 containerd[1626]: time="2025-11-01T01:59:06.944943471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5qvt,Uid:4caf741f-c22d-4e76-9e9d-18f81ca6bba2,Namespace:calico-system,Attempt:1,}" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.891 [INFO][4448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.893 [INFO][4448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" iface="eth0" netns="/var/run/netns/cni-65d4e54c-4d34-0b4c-7055-ed6741e3ac27" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.893 [INFO][4448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" iface="eth0" netns="/var/run/netns/cni-65d4e54c-4d34-0b4c-7055-ed6741e3ac27" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.903 [INFO][4448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" iface="eth0" netns="/var/run/netns/cni-65d4e54c-4d34-0b4c-7055-ed6741e3ac27" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.903 [INFO][4448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.903 [INFO][4448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.950 [INFO][4471] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.951 [INFO][4471] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.951 [INFO][4471] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.965 [WARNING][4471] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.965 [INFO][4471] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.968 [INFO][4471] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:06.976785 containerd[1626]: 2025-11-01 01:59:06.973 [INFO][4448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:06.979172 containerd[1626]: time="2025-11-01T01:59:06.977509839Z" level=info msg="TearDown network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" successfully" Nov 1 01:59:06.979303 containerd[1626]: time="2025-11-01T01:59:06.979283099Z" level=info msg="StopPodSandbox for \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" returns successfully" Nov 1 01:59:06.981432 systemd[1]: run-netns-cni\x2d65d4e54c\x2d4d34\x2d0b4c\x2d7055\x2ded6741e3ac27.mount: Deactivated successfully. 
Nov 1 01:59:06.982229 containerd[1626]: time="2025-11-01T01:59:06.981770041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-tnvl5,Uid:ff93fa77-947d-41bd-9b0a-6912cba460eb,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.870 [INFO][4431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.871 [INFO][4431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" iface="eth0" netns="/var/run/netns/cni-b338c44c-bc9e-f327-97f9-76bf3079a90f" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.874 [INFO][4431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" iface="eth0" netns="/var/run/netns/cni-b338c44c-bc9e-f327-97f9-76bf3079a90f" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.877 [INFO][4431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" iface="eth0" netns="/var/run/netns/cni-b338c44c-bc9e-f327-97f9-76bf3079a90f" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.878 [INFO][4431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.879 [INFO][4431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.950 [INFO][4466] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.950 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.968 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.988 [WARNING][4466] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.988 [INFO][4466] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.990 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:06.998895 containerd[1626]: 2025-11-01 01:59:06.994 [INFO][4431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:07.000487 containerd[1626]: time="2025-11-01T01:59:06.998924177Z" level=info msg="TearDown network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" successfully" Nov 1 01:59:07.000487 containerd[1626]: time="2025-11-01T01:59:06.998948037Z" level=info msg="StopPodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" returns successfully" Nov 1 01:59:07.000558 containerd[1626]: time="2025-11-01T01:59:07.000478323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmvw8,Uid:69be427e-9188-4acc-abfe-94d74b48ccf9,Namespace:kube-system,Attempt:1,}" Nov 1 01:59:07.193741 systemd-networkd[1267]: cali22fa1a6a2df: Link UP Nov 1 01:59:07.196187 systemd-networkd[1267]: cali22fa1a6a2df: Gained carrier Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.042 [INFO][4482] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.061 [INFO][4482] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0 csi-node-driver- calico-system 4caf741f-c22d-4e76-9e9d-18f81ca6bba2 910 0 2025-11-01 01:58:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com csi-node-driver-b5qvt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22fa1a6a2df [] [] }} ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.062 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.111 [INFO][4514] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" HandleID="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.113 [INFO][4514] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" HandleID="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" 
Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"csi-node-driver-b5qvt", "timestamp":"2025-11-01 01:59:07.111806977 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.113 [INFO][4514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.113 [INFO][4514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.113 [INFO][4514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.122 [INFO][4514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.131 [INFO][4514] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.137 [INFO][4514] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.139 [INFO][4514] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.144 [INFO][4514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 
01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.144 [INFO][4514] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.150 [INFO][4514] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0 Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.164 [INFO][4514] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.174 [INFO][4514] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.67/26] block=192.168.43.64/26 handle="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.175 [INFO][4514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.67/26] handle="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.175 [INFO][4514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:59:07.220348 containerd[1626]: 2025-11-01 01:59:07.175 [INFO][4514] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.67/26] IPv6=[] ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" HandleID="k8s-pod-network.b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.222185 containerd[1626]: 2025-11-01 01:59:07.180 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4caf741f-c22d-4e76-9e9d-18f81ca6bba2", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-b5qvt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22fa1a6a2df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:07.222185 containerd[1626]: 2025-11-01 01:59:07.181 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.67/32] ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.222185 containerd[1626]: 2025-11-01 01:59:07.181 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22fa1a6a2df ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.222185 containerd[1626]: 2025-11-01 01:59:07.197 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.222185 containerd[1626]: 2025-11-01 01:59:07.197 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"4caf741f-c22d-4e76-9e9d-18f81ca6bba2", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0", Pod:"csi-node-driver-b5qvt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22fa1a6a2df", MAC:"fa:56:72:e7:18:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:07.222185 containerd[1626]: 2025-11-01 01:59:07.215 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0" Namespace="calico-system" Pod="csi-node-driver-b5qvt" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:07.246048 kubelet[2853]: E1101 01:59:07.239983 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 01:59:07.256170 systemd[1]: run-netns-cni\x2db338c44c\x2dbc9e\x2df327\x2d97f9\x2d76bf3079a90f.mount: Deactivated successfully. Nov 1 01:59:07.286410 containerd[1626]: time="2025-11-01T01:59:07.286269293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:07.287883 containerd[1626]: time="2025-11-01T01:59:07.286622424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:07.287883 containerd[1626]: time="2025-11-01T01:59:07.287810402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:07.289497 containerd[1626]: time="2025-11-01T01:59:07.288311464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:07.314174 systemd-networkd[1267]: calib8e05181185: Link UP Nov 1 01:59:07.315319 systemd-networkd[1267]: calib8e05181185: Gained carrier Nov 1 01:59:07.326250 systemd[1]: run-containerd-runc-k8s.io-b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0-runc.XYhB8P.mount: Deactivated successfully. 
Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.035 [INFO][4490] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.056 [INFO][4490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0 calico-apiserver-7589849df- calico-apiserver ff93fa77-947d-41bd-9b0a-6912cba460eb 912 0 2025-11-01 01:58:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7589849df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com calico-apiserver-7589849df-tnvl5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib8e05181185 [] [] }} ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.056 [INFO][4490] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.132 [INFO][4516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" HandleID="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.353443 
containerd[1626]: 2025-11-01 01:59:07.133 [INFO][4516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" HandleID="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"calico-apiserver-7589849df-tnvl5", "timestamp":"2025-11-01 01:59:07.132446599 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.133 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.178 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.178 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.225 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.235 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.265 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.272 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.278 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.278 [INFO][4516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.280 [INFO][4516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2 Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.286 [INFO][4516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.302 [INFO][4516] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.43.68/26] block=192.168.43.64/26 handle="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.302 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.68/26] handle="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.302 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:07.353443 containerd[1626]: 2025-11-01 01:59:07.302 [INFO][4516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.68/26] IPv6=[] ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" HandleID="k8s-pod-network.17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.355123 containerd[1626]: 2025-11-01 01:59:07.310 [INFO][4490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff93fa77-947d-41bd-9b0a-6912cba460eb", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7589849df-tnvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e05181185", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:07.355123 containerd[1626]: 2025-11-01 01:59:07.310 [INFO][4490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.68/32] ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.355123 containerd[1626]: 2025-11-01 01:59:07.310 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8e05181185 ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.355123 containerd[1626]: 2025-11-01 01:59:07.318 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" 
Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.355123 containerd[1626]: 2025-11-01 01:59:07.318 [INFO][4490] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff93fa77-947d-41bd-9b0a-6912cba460eb", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2", Pod:"calico-apiserver-7589849df-tnvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e05181185", 
MAC:"7a:be:c0:16:bb:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:07.355123 containerd[1626]: 2025-11-01 01:59:07.343 [INFO][4490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-tnvl5" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:07.416968 systemd-networkd[1267]: calia5b791ab724: Link UP Nov 1 01:59:07.417426 systemd-networkd[1267]: calia5b791ab724: Gained carrier Nov 1 01:59:07.427602 containerd[1626]: time="2025-11-01T01:59:07.427481575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5qvt,Uid:4caf741f-c22d-4e76-9e9d-18f81ca6bba2,Namespace:calico-system,Attempt:1,} returns sandbox id \"b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0\"" Nov 1 01:59:07.428323 containerd[1626]: time="2025-11-01T01:59:07.428248426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:07.428619 containerd[1626]: time="2025-11-01T01:59:07.428539082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:07.429092 containerd[1626]: time="2025-11-01T01:59:07.428857707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:07.434950 containerd[1626]: time="2025-11-01T01:59:07.434410154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:07.442152 containerd[1626]: time="2025-11-01T01:59:07.442118573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.081 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.105 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0 coredns-668d6bf9bc- kube-system 69be427e-9188-4acc-abfe-94d74b48ccf9 911 0 2025-11-01 01:58:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com coredns-668d6bf9bc-kmvw8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5b791ab724 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.105 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.169 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" HandleID="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" 
Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.170 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" HandleID="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003105f0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-kmvw8", "timestamp":"2025-11-01 01:59:07.169418454 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.170 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.302 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.302 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.341 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.362 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.369 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.371 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.380 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.380 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.385 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985 Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.390 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.401 [INFO][4529] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.43.69/26] block=192.168.43.64/26 handle="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.401 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.69/26] handle="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.402 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:07.448595 containerd[1626]: 2025-11-01 01:59:07.402 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.69/26] IPv6=[] ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" HandleID="k8s-pod-network.4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.449360 containerd[1626]: 2025-11-01 01:59:07.408 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69be427e-9188-4acc-abfe-94d74b48ccf9", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-kmvw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5b791ab724", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:07.449360 containerd[1626]: 2025-11-01 01:59:07.410 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.69/32] ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.449360 containerd[1626]: 2025-11-01 01:59:07.410 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5b791ab724 ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.449360 containerd[1626]: 2025-11-01 01:59:07.416 [INFO][4502] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.449360 containerd[1626]: 2025-11-01 01:59:07.416 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69be427e-9188-4acc-abfe-94d74b48ccf9", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985", Pod:"coredns-668d6bf9bc-kmvw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5b791ab724", 
MAC:"ca:0e:60:87:ed:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:07.449360 containerd[1626]: 2025-11-01 01:59:07.435 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmvw8" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:07.513239 containerd[1626]: time="2025-11-01T01:59:07.512623610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:07.513239 containerd[1626]: time="2025-11-01T01:59:07.512690324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:07.513239 containerd[1626]: time="2025-11-01T01:59:07.512730529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:07.513239 containerd[1626]: time="2025-11-01T01:59:07.512853287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:07.545349 containerd[1626]: time="2025-11-01T01:59:07.545312852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-tnvl5,Uid:ff93fa77-947d-41bd-9b0a-6912cba460eb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2\"" Nov 1 01:59:07.555398 systemd-networkd[1267]: calif148f172a45: Gained IPv6LL Nov 1 01:59:07.593710 containerd[1626]: time="2025-11-01T01:59:07.593672243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmvw8,Uid:69be427e-9188-4acc-abfe-94d74b48ccf9,Namespace:kube-system,Attempt:1,} returns sandbox id \"4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985\"" Nov 1 01:59:07.609986 containerd[1626]: time="2025-11-01T01:59:07.609630411Z" level=info msg="CreateContainer within sandbox \"4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:59:07.641400 containerd[1626]: time="2025-11-01T01:59:07.641322451Z" level=info msg="CreateContainer within sandbox \"4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea6907a82a9ef7528eb2d4d4e5fa8ed1dc5ef5a9f0e3ab44b7d5ff69fe1d8ee2\"" Nov 1 01:59:07.642767 containerd[1626]: time="2025-11-01T01:59:07.642734356Z" level=info msg="StartContainer for \"ea6907a82a9ef7528eb2d4d4e5fa8ed1dc5ef5a9f0e3ab44b7d5ff69fe1d8ee2\"" Nov 1 01:59:07.727846 containerd[1626]: time="2025-11-01T01:59:07.727366672Z" level=info msg="StartContainer for \"ea6907a82a9ef7528eb2d4d4e5fa8ed1dc5ef5a9f0e3ab44b7d5ff69fe1d8ee2\" returns successfully" Nov 1 01:59:07.779491 containerd[1626]: time="2025-11-01T01:59:07.779359725Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:07.780655 containerd[1626]: 
time="2025-11-01T01:59:07.780615219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:59:07.781301 containerd[1626]: time="2025-11-01T01:59:07.780828774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:59:07.782512 kubelet[2853]: E1101 01:59:07.782434 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:07.782512 kubelet[2853]: E1101 01:59:07.782503 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:07.784762 kubelet[2853]: E1101 01:59:07.783305 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:07.786817 containerd[1626]: time="2025-11-01T01:59:07.786779267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:08.103921 containerd[1626]: time="2025-11-01T01:59:08.103374405Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:08.106551 containerd[1626]: time="2025-11-01T01:59:08.106055937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:08.106551 containerd[1626]: time="2025-11-01T01:59:08.106130988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:08.106850 kubelet[2853]: E1101 01:59:08.106598 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:08.106850 kubelet[2853]: E1101 01:59:08.106690 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:08.109731 kubelet[2853]: E1101 01:59:08.107209 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr874,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-tnvl5_calico-apiserver(ff93fa77-947d-41bd-9b0a-6912cba460eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:08.109731 kubelet[2853]: E1101 01:59:08.108668 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:59:08.110318 containerd[1626]: time="2025-11-01T01:59:08.108379838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:59:08.237999 kubelet[2853]: E1101 
01:59:08.237948 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:59:08.275117 kubelet[2853]: I1101 01:59:08.273855 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kmvw8" podStartSLOduration=41.273827481 podStartE2EDuration="41.273827481s" podCreationTimestamp="2025-11-01 01:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:59:08.257054679 +0000 UTC m=+46.661443036" watchObservedRunningTime="2025-11-01 01:59:08.273827481 +0000 UTC m=+46.678215834" Nov 1 01:59:08.430720 containerd[1626]: time="2025-11-01T01:59:08.430612213Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:08.431587 containerd[1626]: time="2025-11-01T01:59:08.431503246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:59:08.431710 containerd[1626]: time="2025-11-01T01:59:08.431648323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:59:08.432092 
kubelet[2853]: E1101 01:59:08.431888 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:08.432092 kubelet[2853]: E1101 01:59:08.431971 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:08.432092 kubelet[2853]: E1101 01:59:08.432252 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:08.433743 kubelet[2853]: E1101 01:59:08.433534 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:59:08.564995 kubelet[2853]: I1101 01:59:08.564813 2853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:59:08.737107 containerd[1626]: time="2025-11-01T01:59:08.736890633Z" level=info msg="StopPodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\"" Nov 1 01:59:08.738161 containerd[1626]: time="2025-11-01T01:59:08.736949143Z" level=info msg="StopPodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\"" Nov 1 01:59:08.771445 systemd-networkd[1267]: calia5b791ab724: Gained IPv6LL Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.837 [INFO][4764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.841 [INFO][4764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" iface="eth0" netns="/var/run/netns/cni-8c835f98-2c48-555c-adbd-94e3a2b60615" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" iface="eth0" netns="/var/run/netns/cni-8c835f98-2c48-555c-adbd-94e3a2b60615" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" iface="eth0" netns="/var/run/netns/cni-8c835f98-2c48-555c-adbd-94e3a2b60615" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.891 [INFO][4782] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.891 [INFO][4782] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.892 [INFO][4782] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.908 [WARNING][4782] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.908 [INFO][4782] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.912 [INFO][4782] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:08.927209 containerd[1626]: 2025-11-01 01:59:08.918 [INFO][4764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:08.927209 containerd[1626]: time="2025-11-01T01:59:08.923751803Z" level=info msg="TearDown network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" successfully" Nov 1 01:59:08.927209 containerd[1626]: time="2025-11-01T01:59:08.923793953Z" level=info msg="StopPodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" returns successfully" Nov 1 01:59:08.935541 containerd[1626]: time="2025-11-01T01:59:08.928540680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-8r8qj,Uid:53035908-eec7-4eef-b118-526472e0fe2d,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:59:08.930366 systemd[1]: run-netns-cni\x2d8c835f98\x2d2c48\x2d555c\x2dadbd\x2d94e3a2b60615.mount: Deactivated successfully. 
Nov 1 01:59:08.964477 systemd-networkd[1267]: cali22fa1a6a2df: Gained IPv6LL Nov 1 01:59:08.964766 systemd-networkd[1267]: calib8e05181185: Gained IPv6LL Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.837 [INFO][4763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.837 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" iface="eth0" netns="/var/run/netns/cni-8eb1454f-9cb8-b1db-c690-4253173709ad" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.838 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" iface="eth0" netns="/var/run/netns/cni-8eb1454f-9cb8-b1db-c690-4253173709ad" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" iface="eth0" netns="/var/run/netns/cni-8eb1454f-9cb8-b1db-c690-4253173709ad" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.844 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.971 [INFO][4784] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.974 [INFO][4784] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.974 [INFO][4784] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.987 [WARNING][4784] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.988 [INFO][4784] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:08.990 [INFO][4784] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:09.024107 containerd[1626]: 2025-11-01 01:59:09.001 [INFO][4763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:09.028522 systemd[1]: run-netns-cni\x2d8eb1454f\x2d9cb8\x2db1db\x2dc690\x2d4253173709ad.mount: Deactivated successfully. 
Nov 1 01:59:09.034521 containerd[1626]: time="2025-11-01T01:59:09.033433902Z" level=info msg="TearDown network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" successfully" Nov 1 01:59:09.034521 containerd[1626]: time="2025-11-01T01:59:09.033472444Z" level=info msg="StopPodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" returns successfully" Nov 1 01:59:09.036783 containerd[1626]: time="2025-11-01T01:59:09.036743508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7slft,Uid:088192a6-ad05-483b-b9cf-bbb1b8b9bbb7,Namespace:kube-system,Attempt:1,}" Nov 1 01:59:09.133241 kernel: bpftool[4844]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 01:59:09.259355 kubelet[2853]: E1101 01:59:09.257266 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:59:09.259355 kubelet[2853]: E1101 01:59:09.257743 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:59:09.368767 systemd-networkd[1267]: cali86efaea5bbe: Link UP Nov 1 01:59:09.371649 systemd-networkd[1267]: cali86efaea5bbe: Gained carrier Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.158 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0 calico-apiserver-7589849df- calico-apiserver 53035908-eec7-4eef-b118-526472e0fe2d 964 0 2025-11-01 01:58:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7589849df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com calico-apiserver-7589849df-8r8qj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali86efaea5bbe [] [] }} ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.158 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" 
WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.239 [INFO][4849] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" HandleID="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.241 [INFO][4849] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" HandleID="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"calico-apiserver-7589849df-8r8qj", "timestamp":"2025-11-01 01:59:09.239788594 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.241 [INFO][4849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.243 [INFO][4849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.243 [INFO][4849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.265 [INFO][4849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.295 [INFO][4849] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.306 [INFO][4849] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.311 [INFO][4849] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.315 [INFO][4849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.315 [INFO][4849] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.320 [INFO][4849] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415 Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.328 [INFO][4849] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.338 [INFO][4849] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.43.70/26] block=192.168.43.64/26 handle="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.338 [INFO][4849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.70/26] handle="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.338 [INFO][4849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:09.431857 containerd[1626]: 2025-11-01 01:59:09.338 [INFO][4849] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.70/26] IPv6=[] ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" HandleID="k8s-pod-network.ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.436038 containerd[1626]: 2025-11-01 01:59:09.350 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"53035908-eec7-4eef-b118-526472e0fe2d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7589849df-8r8qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86efaea5bbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:09.436038 containerd[1626]: 2025-11-01 01:59:09.351 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.70/32] ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.436038 containerd[1626]: 2025-11-01 01:59:09.352 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86efaea5bbe ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.436038 containerd[1626]: 2025-11-01 01:59:09.372 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" 
Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.436038 containerd[1626]: 2025-11-01 01:59:09.375 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"53035908-eec7-4eef-b118-526472e0fe2d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415", Pod:"calico-apiserver-7589849df-8r8qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86efaea5bbe", 
MAC:"4e:5f:0c:7b:2e:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:09.436038 containerd[1626]: 2025-11-01 01:59:09.404 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415" Namespace="calico-apiserver" Pod="calico-apiserver-7589849df-8r8qj" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:09.507239 containerd[1626]: time="2025-11-01T01:59:09.505130825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:09.509196 containerd[1626]: time="2025-11-01T01:59:09.508574834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:09.509196 containerd[1626]: time="2025-11-01T01:59:09.508600886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:09.509196 containerd[1626]: time="2025-11-01T01:59:09.508718043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:09.590846 systemd-networkd[1267]: cali18643e34d0f: Link UP Nov 1 01:59:09.599545 systemd-networkd[1267]: cali18643e34d0f: Gained carrier Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.286 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0 coredns-668d6bf9bc- kube-system 088192a6-ad05-483b-b9cf-bbb1b8b9bbb7 963 0 2025-11-01 01:58:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com coredns-668d6bf9bc-7slft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18643e34d0f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.286 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.462 [INFO][4879] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" HandleID="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.463 [INFO][4879] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" HandleID="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-7slft", "timestamp":"2025-11-01 01:59:09.462223711 +0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.463 [INFO][4879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.463 [INFO][4879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.463 [INFO][4879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.475 [INFO][4879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.482 [INFO][4879] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.498 [INFO][4879] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.502 [INFO][4879] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.509 [INFO][4879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.509 [INFO][4879] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.517 [INFO][4879] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.536 [INFO][4879] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.554 [INFO][4879] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.43.71/26] block=192.168.43.64/26 handle="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.555 [INFO][4879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.71/26] handle="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.555 [INFO][4879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:09.640649 containerd[1626]: 2025-11-01 01:59:09.556 [INFO][4879] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.71/26] IPv6=[] ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" HandleID="k8s-pod-network.33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.643608 containerd[1626]: 2025-11-01 01:59:09.575 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-7slft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18643e34d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:09.643608 containerd[1626]: 2025-11-01 01:59:09.575 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.71/32] ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.643608 containerd[1626]: 2025-11-01 01:59:09.575 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18643e34d0f ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.643608 containerd[1626]: 2025-11-01 01:59:09.603 [INFO][4829] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.643608 containerd[1626]: 2025-11-01 01:59:09.605 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f", Pod:"coredns-668d6bf9bc-7slft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18643e34d0f", 
MAC:"36:e8:b3:55:e8:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:09.643608 containerd[1626]: 2025-11-01 01:59:09.630 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7slft" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:09.688191 containerd[1626]: time="2025-11-01T01:59:09.678444407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:09.688191 containerd[1626]: time="2025-11-01T01:59:09.678493080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:09.688191 containerd[1626]: time="2025-11-01T01:59:09.678503415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:09.688191 containerd[1626]: time="2025-11-01T01:59:09.678594284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:09.713622 containerd[1626]: time="2025-11-01T01:59:09.712484036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7589849df-8r8qj,Uid:53035908-eec7-4eef-b118-526472e0fe2d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415\"" Nov 1 01:59:09.716998 containerd[1626]: time="2025-11-01T01:59:09.715482797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:09.739015 containerd[1626]: time="2025-11-01T01:59:09.738984296Z" level=info msg="StopPodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\"" Nov 1 01:59:09.784181 containerd[1626]: time="2025-11-01T01:59:09.781353936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7slft,Uid:088192a6-ad05-483b-b9cf-bbb1b8b9bbb7,Namespace:kube-system,Attempt:1,} returns sandbox id \"33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f\"" Nov 1 01:59:09.797999 containerd[1626]: time="2025-11-01T01:59:09.797582276Z" level=info msg="CreateContainer within sandbox \"33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:59:09.825031 containerd[1626]: time="2025-11-01T01:59:09.824993233Z" level=info msg="CreateContainer within sandbox \"33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca6ea43d240c9d1bb80635e4c1bc1754bbe17ba88d1dd5e1c2cc4e55de96221d\"" Nov 1 01:59:09.830115 containerd[1626]: time="2025-11-01T01:59:09.829812737Z" level=info msg="StartContainer for \"ca6ea43d240c9d1bb80635e4c1bc1754bbe17ba88d1dd5e1c2cc4e55de96221d\"" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.826 [INFO][4988] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.830 [INFO][4988] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" iface="eth0" netns="/var/run/netns/cni-a5ab13fb-812e-cb04-fe44-4aee385b6503" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.833 [INFO][4988] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" iface="eth0" netns="/var/run/netns/cni-a5ab13fb-812e-cb04-fe44-4aee385b6503" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.836 [INFO][4988] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" iface="eth0" netns="/var/run/netns/cni-a5ab13fb-812e-cb04-fe44-4aee385b6503" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.836 [INFO][4988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.837 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.906 [INFO][5004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.906 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.906 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.915 [WARNING][5004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.915 [INFO][5004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.917 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:09.929184 containerd[1626]: 2025-11-01 01:59:09.919 [INFO][4988] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:09.930657 containerd[1626]: time="2025-11-01T01:59:09.929516629Z" level=info msg="TearDown network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" successfully" Nov 1 01:59:09.930657 containerd[1626]: time="2025-11-01T01:59:09.929547893Z" level=info msg="StopPodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" returns successfully" Nov 1 01:59:09.932259 containerd[1626]: time="2025-11-01T01:59:09.932233036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d498fd-4kbzk,Uid:fdce623b-f498-4a86-b9d7-a71f9568f87d,Namespace:calico-system,Attempt:1,}" Nov 1 01:59:10.031258 containerd[1626]: time="2025-11-01T01:59:10.029660662Z" level=info msg="StartContainer for \"ca6ea43d240c9d1bb80635e4c1bc1754bbe17ba88d1dd5e1c2cc4e55de96221d\" returns successfully" Nov 1 01:59:10.035700 containerd[1626]: time="2025-11-01T01:59:10.035557673Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:10.036095 containerd[1626]: time="2025-11-01T01:59:10.036053794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:10.037156 containerd[1626]: time="2025-11-01T01:59:10.036254182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:10.037233 kubelet[2853]: E1101 01:59:10.036398 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:10.037233 kubelet[2853]: E1101 01:59:10.036459 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:10.037233 kubelet[2853]: E1101 01:59:10.036618 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8xmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-8r8qj_calico-apiserver(53035908-eec7-4eef-b118-526472e0fe2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:10.037983 kubelet[2853]: E1101 01:59:10.037913 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:10.168728 systemd-networkd[1267]: vxlan.calico: Link UP Nov 1 01:59:10.168736 systemd-networkd[1267]: vxlan.calico: Gained carrier Nov 1 01:59:10.251687 systemd[1]: 
run-netns-cni\x2da5ab13fb\x2d812e\x2dcb04\x2dfe44\x2d4aee385b6503.mount: Deactivated successfully. Nov 1 01:59:10.278769 kubelet[2853]: E1101 01:59:10.276530 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:10.285241 kubelet[2853]: I1101 01:59:10.283556 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7slft" podStartSLOduration=43.283530241 podStartE2EDuration="43.283530241s" podCreationTimestamp="2025-11-01 01:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:59:10.281787447 +0000 UTC m=+48.686175804" watchObservedRunningTime="2025-11-01 01:59:10.283530241 +0000 UTC m=+48.687918586" Nov 1 01:59:10.300316 systemd-networkd[1267]: cali8eeeb0e21ff: Link UP Nov 1 01:59:10.307312 systemd-networkd[1267]: cali8eeeb0e21ff: Gained carrier Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.069 [INFO][5046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0 calico-kube-controllers-549d498fd- calico-system fdce623b-f498-4a86-b9d7-a71f9568f87d 983 0 2025-11-01 01:58:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:549d498fd projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gnbw4.gb1.brightbox.com calico-kube-controllers-549d498fd-4kbzk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8eeeb0e21ff [] [] }} ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.069 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.143 [INFO][5071] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" HandleID="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.145 [INFO][5071] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" HandleID="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e90), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gnbw4.gb1.brightbox.com", "pod":"calico-kube-controllers-549d498fd-4kbzk", "timestamp":"2025-11-01 01:59:10.143834554 
+0000 UTC"}, Hostname:"srv-gnbw4.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.145 [INFO][5071] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.145 [INFO][5071] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.145 [INFO][5071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gnbw4.gb1.brightbox.com' Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.155 [INFO][5071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.162 [INFO][5071] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.232 [INFO][5071] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.234 [INFO][5071] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.237 [INFO][5071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.237 [INFO][5071] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 
2025-11-01 01:59:10.242 [INFO][5071] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.253 [INFO][5071] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.270 [INFO][5071] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.72/26] block=192.168.43.64/26 handle="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.270 [INFO][5071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.72/26] handle="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" host="srv-gnbw4.gb1.brightbox.com" Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.270 [INFO][5071] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:59:10.347149 containerd[1626]: 2025-11-01 01:59:10.271 [INFO][5071] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.72/26] IPv6=[] ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" HandleID="k8s-pod-network.ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.351544 containerd[1626]: 2025-11-01 01:59:10.280 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0", GenerateName:"calico-kube-controllers-549d498fd-", Namespace:"calico-system", SelfLink:"", UID:"fdce623b-f498-4a86-b9d7-a71f9568f87d", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d498fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-549d498fd-4kbzk", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8eeeb0e21ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:10.351544 containerd[1626]: 2025-11-01 01:59:10.280 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.72/32] ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.351544 containerd[1626]: 2025-11-01 01:59:10.280 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eeeb0e21ff ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.351544 containerd[1626]: 2025-11-01 01:59:10.310 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.351544 containerd[1626]: 2025-11-01 01:59:10.320 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0", GenerateName:"calico-kube-controllers-549d498fd-", Namespace:"calico-system", SelfLink:"", UID:"fdce623b-f498-4a86-b9d7-a71f9568f87d", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d498fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac", Pod:"calico-kube-controllers-549d498fd-4kbzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8eeeb0e21ff", MAC:"5e:bd:1c:3d:0c:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:10.351544 containerd[1626]: 2025-11-01 01:59:10.342 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac" Namespace="calico-system" Pod="calico-kube-controllers-549d498fd-4kbzk" 
WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:10.397981 containerd[1626]: time="2025-11-01T01:59:10.396807784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:59:10.397981 containerd[1626]: time="2025-11-01T01:59:10.396867570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:59:10.397981 containerd[1626]: time="2025-11-01T01:59:10.396901755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:10.397981 containerd[1626]: time="2025-11-01T01:59:10.397079541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:59:10.555615 containerd[1626]: time="2025-11-01T01:59:10.555206971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d498fd-4kbzk,Uid:fdce623b-f498-4a86-b9d7-a71f9568f87d,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac\"" Nov 1 01:59:10.560781 containerd[1626]: time="2025-11-01T01:59:10.560316548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:59:10.755270 systemd-networkd[1267]: cali86efaea5bbe: Gained IPv6LL Nov 1 01:59:10.891679 containerd[1626]: time="2025-11-01T01:59:10.891073854Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:10.892826 containerd[1626]: time="2025-11-01T01:59:10.892725307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:59:10.892826 containerd[1626]: time="2025-11-01T01:59:10.892741262Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:59:10.893330 kubelet[2853]: E1101 01:59:10.893076 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:10.893330 kubelet[2853]: E1101 01:59:10.893163 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:10.893559 kubelet[2853]: E1101 01:59:10.893323 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdj45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d498fd-4kbzk_calico-system(fdce623b-f498-4a86-b9d7-a71f9568f87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:10.895094 kubelet[2853]: E1101 01:59:10.895031 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:59:11.290766 kubelet[2853]: E1101 01:59:11.288840 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:59:11.294348 kubelet[2853]: E1101 01:59:11.293816 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:11.459499 systemd-networkd[1267]: cali18643e34d0f: Gained IPv6LL Nov 1 01:59:11.907540 systemd-networkd[1267]: cali8eeeb0e21ff: Gained IPv6LL Nov 1 01:59:12.035727 systemd-networkd[1267]: vxlan.calico: Gained IPv6LL Nov 1 01:59:12.290502 kubelet[2853]: E1101 01:59:12.290252 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" 
podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:59:15.742092 containerd[1626]: time="2025-11-01T01:59:15.741938197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:59:16.056381 containerd[1626]: time="2025-11-01T01:59:16.056008089Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:16.057517 containerd[1626]: time="2025-11-01T01:59:16.057286498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:59:16.057517 containerd[1626]: time="2025-11-01T01:59:16.057384361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:59:16.057836 kubelet[2853]: E1101 01:59:16.057734 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:16.058684 kubelet[2853]: E1101 01:59:16.057850 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:16.058684 kubelet[2853]: E1101 01:59:16.058123 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9caf0efd36747098a83dd07c388322c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:16.063183 containerd[1626]: time="2025-11-01T01:59:16.062557159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
01:59:16.377340 containerd[1626]: time="2025-11-01T01:59:16.377183304Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:16.378668 containerd[1626]: time="2025-11-01T01:59:16.378598602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:59:16.379276 containerd[1626]: time="2025-11-01T01:59:16.378981613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:59:16.379637 kubelet[2853]: E1101 01:59:16.379504 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:16.379637 kubelet[2853]: E1101 01:59:16.379620 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:16.379950 kubelet[2853]: E1101 01:59:16.379825 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:16.381699 kubelet[2853]: E1101 01:59:16.381639 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 01:59:20.740095 containerd[1626]: time="2025-11-01T01:59:20.739942558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:59:21.068506 containerd[1626]: time="2025-11-01T01:59:21.068213764Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:21.071369 containerd[1626]: time="2025-11-01T01:59:21.069407554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:59:21.071369 containerd[1626]: time="2025-11-01T01:59:21.069510146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:59:21.071647 kubelet[2853]: E1101 
01:59:21.069884 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:21.071647 kubelet[2853]: E1101 01:59:21.069981 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:21.071647 kubelet[2853]: E1101 01:59:21.070512 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,R
eadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:21.077410 containerd[1626]: time="2025-11-01T01:59:21.076447960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:59:21.393699 containerd[1626]: time="2025-11-01T01:59:21.393549680Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:21.394972 containerd[1626]: time="2025-11-01T01:59:21.394525457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:59:21.394972 containerd[1626]: time="2025-11-01T01:59:21.394678287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:21.395273 kubelet[2853]: E1101 01:59:21.394914 2853 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:21.395273 kubelet[2853]: E1101 01:59:21.394988 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:21.396229 kubelet[2853]: E1101 01:59:21.395417 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:tru
e,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27q6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qxr6w_calico-system(7d2ab813-9398-4622-9019-515028818713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:21.396679 containerd[1626]: time="2025-11-01T01:59:21.395987399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:59:21.397881 
kubelet[2853]: E1101 01:59:21.397812 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 01:59:21.705947 containerd[1626]: time="2025-11-01T01:59:21.705606538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:21.707600 containerd[1626]: time="2025-11-01T01:59:21.707011069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:59:21.707600 containerd[1626]: time="2025-11-01T01:59:21.707210582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:59:21.707783 kubelet[2853]: E1101 01:59:21.707406 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:21.707783 kubelet[2853]: E1101 01:59:21.707491 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:21.707783 kubelet[2853]: E1101 01:59:21.707690 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:21.710699 kubelet[2853]: E1101 01:59:21.710242 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:59:21.769350 containerd[1626]: time="2025-11-01T01:59:21.769310086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:21.774373 containerd[1626]: time="2025-11-01T01:59:21.774341541Z" level=info msg="StopPodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\"" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.848 [WARNING][5243] cni-plugin/k8s.go 604: CNI_CONTAINERID does not 
match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"53035908-eec7-4eef-b118-526472e0fe2d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415", Pod:"calico-apiserver-7589849df-8r8qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86efaea5bbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.849 [INFO][5243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.849 
[INFO][5243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" iface="eth0" netns="" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.849 [INFO][5243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.849 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.883 [INFO][5250] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.883 [INFO][5250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.883 [INFO][5250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.892 [WARNING][5250] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.893 [INFO][5250] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.898 [INFO][5250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:21.904830 containerd[1626]: 2025-11-01 01:59:21.902 [INFO][5243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:21.906381 containerd[1626]: time="2025-11-01T01:59:21.905692285Z" level=info msg="TearDown network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" successfully" Nov 1 01:59:21.906381 containerd[1626]: time="2025-11-01T01:59:21.905763466Z" level=info msg="StopPodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" returns successfully" Nov 1 01:59:21.907641 containerd[1626]: time="2025-11-01T01:59:21.907467032Z" level=info msg="RemovePodSandbox for \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\"" Nov 1 01:59:21.907641 containerd[1626]: time="2025-11-01T01:59:21.907523524Z" level=info msg="Forcibly stopping sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\"" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:21.976 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"53035908-eec7-4eef-b118-526472e0fe2d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"ef2d6b6788352c90e062df129a6b9bbc91bc6c6a0d471e32cb3c372e24471415", Pod:"calico-apiserver-7589849df-8r8qj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86efaea5bbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:21.976 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:21.976 [INFO][5265] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" iface="eth0" netns="" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:21.976 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:21.976 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.014 [INFO][5272] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.014 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.014 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.021 [WARNING][5272] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.021 [INFO][5272] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" HandleID="k8s-pod-network.990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--8r8qj-eth0" Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.022 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.027288 containerd[1626]: 2025-11-01 01:59:22.024 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918" Nov 1 01:59:22.027288 containerd[1626]: time="2025-11-01T01:59:22.026912896Z" level=info msg="TearDown network for sandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" successfully" Nov 1 01:59:22.031234 containerd[1626]: time="2025-11-01T01:59:22.031095750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:22.031234 containerd[1626]: time="2025-11-01T01:59:22.031212983Z" level=info msg="RemovePodSandbox \"990ab82056f766af1512d4f92358aa89970a0fbfdaea2bbb2993ff14debba918\" returns successfully" Nov 1 01:59:22.032648 containerd[1626]: time="2025-11-01T01:59:22.032467746Z" level=info msg="StopPodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\"" Nov 1 01:59:22.101176 containerd[1626]: time="2025-11-01T01:59:22.101044530Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:22.102194 containerd[1626]: time="2025-11-01T01:59:22.101913724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:22.102194 containerd[1626]: time="2025-11-01T01:59:22.101966958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:22.102314 kubelet[2853]: E1101 01:59:22.102252 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:22.103330 kubelet[2853]: E1101 01:59:22.102321 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:22.103330 kubelet[2853]: E1101 
01:59:22.102528 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr874,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-tnvl5_calico-apiserver(ff93fa77-947d-41bd-9b0a-6912cba460eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:22.103848 kubelet[2853]: E1101 01:59:22.103803 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.082 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7d2ab813-9398-4622-9019-515028818713", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"8472be175764213a272de39ab7399973304268436b462a59487842144087429b", Pod:"goldmane-666569f655-qxr6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif148f172a45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.082 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.082 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" iface="eth0" netns="" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.082 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.082 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.116 [INFO][5294] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.116 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.116 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.125 [WARNING][5294] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.125 [INFO][5294] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.127 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.131316 containerd[1626]: 2025-11-01 01:59:22.129 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.132376 containerd[1626]: time="2025-11-01T01:59:22.131788087Z" level=info msg="TearDown network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" successfully" Nov 1 01:59:22.132376 containerd[1626]: time="2025-11-01T01:59:22.131820684Z" level=info msg="StopPodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" returns successfully" Nov 1 01:59:22.132675 containerd[1626]: time="2025-11-01T01:59:22.132648793Z" level=info msg="RemovePodSandbox for \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\"" Nov 1 01:59:22.132719 containerd[1626]: time="2025-11-01T01:59:22.132700132Z" level=info msg="Forcibly stopping sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\"" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.187 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7d2ab813-9398-4622-9019-515028818713", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"8472be175764213a272de39ab7399973304268436b462a59487842144087429b", Pod:"goldmane-666569f655-qxr6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif148f172a45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.187 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.187 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" iface="eth0" netns="" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.187 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.187 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.218 [INFO][5315] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.218 [INFO][5315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.219 [INFO][5315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.226 [WARNING][5315] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.226 [INFO][5315] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" HandleID="k8s-pod-network.6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Workload="srv--gnbw4.gb1.brightbox.com-k8s-goldmane--666569f655--qxr6w-eth0" Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.228 [INFO][5315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.232746 containerd[1626]: 2025-11-01 01:59:22.230 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4" Nov 1 01:59:22.232746 containerd[1626]: time="2025-11-01T01:59:22.232659668Z" level=info msg="TearDown network for sandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" successfully" Nov 1 01:59:22.236502 containerd[1626]: time="2025-11-01T01:59:22.236471004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:22.236650 containerd[1626]: time="2025-11-01T01:59:22.236635596Z" level=info msg="RemovePodSandbox \"6e74a46bc442eb18b2ecdb1bdf13daa5759972daf5e92cb61b11e90270c475a4\" returns successfully" Nov 1 01:59:22.237647 containerd[1626]: time="2025-11-01T01:59:22.237618224Z" level=info msg="StopPodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\"" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.283 [WARNING][5329] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.283 [INFO][5329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.283 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" iface="eth0" netns="" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.283 [INFO][5329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.283 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.322 [INFO][5336] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.323 [INFO][5336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.323 [INFO][5336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.331 [WARNING][5336] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.331 [INFO][5336] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.335 [INFO][5336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.341279 containerd[1626]: 2025-11-01 01:59:22.338 [INFO][5329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.341279 containerd[1626]: time="2025-11-01T01:59:22.341172346Z" level=info msg="TearDown network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" successfully" Nov 1 01:59:22.341279 containerd[1626]: time="2025-11-01T01:59:22.341202355Z" level=info msg="StopPodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" returns successfully" Nov 1 01:59:22.341788 containerd[1626]: time="2025-11-01T01:59:22.341693378Z" level=info msg="RemovePodSandbox for \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\"" Nov 1 01:59:22.341788 containerd[1626]: time="2025-11-01T01:59:22.341720949Z" level=info msg="Forcibly stopping sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\"" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.397 [WARNING][5350] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" WorkloadEndpoint="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.398 [INFO][5350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.398 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" iface="eth0" netns="" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.398 [INFO][5350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.398 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.430 [INFO][5357] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.430 [INFO][5357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.430 [INFO][5357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.439 [WARNING][5357] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.439 [INFO][5357] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" HandleID="k8s-pod-network.6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-whisker--7bf9fcd996--mjzn9-eth0" Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.446 [INFO][5357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.450837 containerd[1626]: 2025-11-01 01:59:22.448 [INFO][5350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d" Nov 1 01:59:22.451540 containerd[1626]: time="2025-11-01T01:59:22.450886823Z" level=info msg="TearDown network for sandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" successfully" Nov 1 01:59:22.454715 containerd[1626]: time="2025-11-01T01:59:22.454637858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:22.454831 containerd[1626]: time="2025-11-01T01:59:22.454773079Z" level=info msg="RemovePodSandbox \"6ded7211ffbaf841517a7a3d3c46a70d821ea28cf292955789c59c4b85a7b31d\" returns successfully" Nov 1 01:59:22.456030 containerd[1626]: time="2025-11-01T01:59:22.455706928Z" level=info msg="StopPodSandbox for \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\"" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.507 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff93fa77-947d-41bd-9b0a-6912cba460eb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2", Pod:"calico-apiserver-7589849df-tnvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e05181185", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.507 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.507 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" iface="eth0" netns="" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.507 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.507 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.538 [INFO][5378] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.538 [INFO][5378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.538 [INFO][5378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.546 [WARNING][5378] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.546 [INFO][5378] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0" Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.548 [INFO][5378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.552326 containerd[1626]: 2025-11-01 01:59:22.550 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:22.552920 containerd[1626]: time="2025-11-01T01:59:22.552390977Z" level=info msg="TearDown network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" successfully" Nov 1 01:59:22.552920 containerd[1626]: time="2025-11-01T01:59:22.552422566Z" level=info msg="StopPodSandbox for \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" returns successfully" Nov 1 01:59:22.553949 containerd[1626]: time="2025-11-01T01:59:22.553562583Z" level=info msg="RemovePodSandbox for \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\"" Nov 1 01:59:22.553949 containerd[1626]: time="2025-11-01T01:59:22.553595939Z" level=info msg="Forcibly stopping sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\"" Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.597 [WARNING][5392] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0", GenerateName:"calico-apiserver-7589849df-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff93fa77-947d-41bd-9b0a-6912cba460eb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7589849df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"17ba4ce828a4be48555765d0d03d4f9c2040e080db42f68d60c48c663cb3c0b2", Pod:"calico-apiserver-7589849df-tnvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e05181185", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.597 [INFO][5392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.597 [INFO][5392] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" iface="eth0" netns=""
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.597 [INFO][5392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1"
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.597 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1"
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.636 [INFO][5400] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0"
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.638 [INFO][5400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.638 [INFO][5400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.646 [WARNING][5400] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0"
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.646 [INFO][5400] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" HandleID="k8s-pod-network.fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--apiserver--7589849df--tnvl5-eth0"
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.648 [INFO][5400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:59:22.651714 containerd[1626]: 2025-11-01 01:59:22.649 [INFO][5392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1"
Nov 1 01:59:22.652392 containerd[1626]: time="2025-11-01T01:59:22.651794051Z" level=info msg="TearDown network for sandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" successfully"
Nov 1 01:59:22.657392 containerd[1626]: time="2025-11-01T01:59:22.657335200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 01:59:22.657392 containerd[1626]: time="2025-11-01T01:59:22.657417119Z" level=info msg="RemovePodSandbox \"fc7ab09e37f2d86a69ee97abefa59c82b988de8345d98eb041c7350b678bb5d1\" returns successfully" Nov 1 01:59:22.659461 containerd[1626]: time="2025-11-01T01:59:22.658358997Z" level=info msg="StopPodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\"" Nov 1 01:59:22.738091 containerd[1626]: time="2025-11-01T01:59:22.738046054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.707 [WARNING][5414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f", Pod:"coredns-668d6bf9bc-7slft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.71/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18643e34d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.707 [INFO][5414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.707 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" iface="eth0" netns="" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.707 [INFO][5414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.707 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.741 [INFO][5421] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.741 [INFO][5421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.742 [INFO][5421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.750 [WARNING][5421] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.750 [INFO][5421] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.755 [INFO][5421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.759608 containerd[1626]: 2025-11-01 01:59:22.757 [INFO][5414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.761468 containerd[1626]: time="2025-11-01T01:59:22.759669241Z" level=info msg="TearDown network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" successfully" Nov 1 01:59:22.761468 containerd[1626]: time="2025-11-01T01:59:22.759702933Z" level=info msg="StopPodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" returns successfully" Nov 1 01:59:22.761468 containerd[1626]: time="2025-11-01T01:59:22.760630629Z" level=info msg="RemovePodSandbox for \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\"" Nov 1 01:59:22.761468 containerd[1626]: time="2025-11-01T01:59:22.760662042Z" level=info msg="Forcibly stopping sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\"" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.816 [WARNING][5435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"088192a6-ad05-483b-b9cf-bbb1b8b9bbb7", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"33e9a0c38d67f9986f64262d40f8102d76c8e07ed3995d2d84e77775ca96374f", Pod:"coredns-668d6bf9bc-7slft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18643e34d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.884636 containerd[1626]: 
2025-11-01 01:59:22.816 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.817 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" iface="eth0" netns="" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.817 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.817 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.855 [INFO][5442] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.855 [INFO][5442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.855 [INFO][5442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.870 [WARNING][5442] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.871 [INFO][5442] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" HandleID="k8s-pod-network.b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--7slft-eth0" Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.874 [INFO][5442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.884636 containerd[1626]: 2025-11-01 01:59:22.880 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d" Nov 1 01:59:22.887605 containerd[1626]: time="2025-11-01T01:59:22.884736606Z" level=info msg="TearDown network for sandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" successfully" Nov 1 01:59:22.889559 containerd[1626]: time="2025-11-01T01:59:22.889507303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:22.890621 containerd[1626]: time="2025-11-01T01:59:22.889580563Z" level=info msg="RemovePodSandbox \"b0148a020b9934039f869cf2de7fbe1f179377f7a574461e524feb3293448d2d\" returns successfully" Nov 1 01:59:22.890795 containerd[1626]: time="2025-11-01T01:59:22.890743253Z" level=info msg="StopPodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\"" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.933 [WARNING][5457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0", GenerateName:"calico-kube-controllers-549d498fd-", Namespace:"calico-system", SelfLink:"", UID:"fdce623b-f498-4a86-b9d7-a71f9568f87d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d498fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac", Pod:"calico-kube-controllers-549d498fd-4kbzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.72/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8eeeb0e21ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.933 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.933 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" iface="eth0" netns="" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.933 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.933 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.965 [INFO][5465] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.966 [INFO][5465] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.966 [INFO][5465] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.974 [WARNING][5465] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.974 [INFO][5465] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.976 [INFO][5465] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:22.979953 containerd[1626]: 2025-11-01 01:59:22.977 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:22.981854 containerd[1626]: time="2025-11-01T01:59:22.980220930Z" level=info msg="TearDown network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" successfully" Nov 1 01:59:22.981854 containerd[1626]: time="2025-11-01T01:59:22.980276001Z" level=info msg="StopPodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" returns successfully" Nov 1 01:59:22.981854 containerd[1626]: time="2025-11-01T01:59:22.980911387Z" level=info msg="RemovePodSandbox for \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\"" Nov 1 01:59:22.981854 containerd[1626]: time="2025-11-01T01:59:22.980945397Z" level=info msg="Forcibly stopping sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\"" Nov 1 01:59:23.055878 containerd[1626]: time="2025-11-01T01:59:23.055654740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:23.058038 containerd[1626]: 
time="2025-11-01T01:59:23.057774008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 01:59:23.058038 containerd[1626]: time="2025-11-01T01:59:23.057796110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 01:59:23.058590 kubelet[2853]: E1101 01:59:23.058516 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 01:59:23.058819 kubelet[2853]: E1101 01:59:23.058652 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 01:59:23.059364 kubelet[2853]: E1101 01:59:23.059200 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8xmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-8r8qj_calico-apiserver(53035908-eec7-4eef-b118-526472e0fe2d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:23.061285 kubelet[2853]: E1101 01:59:23.061231 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.029 [WARNING][5479] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0", GenerateName:"calico-kube-controllers-549d498fd-", Namespace:"calico-system", SelfLink:"", UID:"fdce623b-f498-4a86-b9d7-a71f9568f87d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d498fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"ad6e44ab0bd05cf4b472f72c888b9854443c5da6cb1b518f48748f76540f66ac", Pod:"calico-kube-controllers-549d498fd-4kbzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8eeeb0e21ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.029 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.029 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" iface="eth0" netns="" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.029 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.029 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.055 [INFO][5486] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.056 [INFO][5486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.056 [INFO][5486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.065 [WARNING][5486] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.065 [INFO][5486] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" HandleID="k8s-pod-network.d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Workload="srv--gnbw4.gb1.brightbox.com-k8s-calico--kube--controllers--549d498fd--4kbzk-eth0" Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.066 [INFO][5486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:23.073176 containerd[1626]: 2025-11-01 01:59:23.069 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7" Nov 1 01:59:23.073176 containerd[1626]: time="2025-11-01T01:59:23.072400109Z" level=info msg="TearDown network for sandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" successfully" Nov 1 01:59:23.076427 containerd[1626]: time="2025-11-01T01:59:23.076394811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:23.076565 containerd[1626]: time="2025-11-01T01:59:23.076548889Z" level=info msg="RemovePodSandbox \"d007d329fb8284ee14fdd7c02527b705961db693fb68abd0496aa7f86b4c5ba7\" returns successfully" Nov 1 01:59:23.077172 containerd[1626]: time="2025-11-01T01:59:23.077149986Z" level=info msg="StopPodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\"" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.117 [WARNING][5501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69be427e-9188-4acc-abfe-94d74b48ccf9", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985", Pod:"coredns-668d6bf9bc-kmvw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5b791ab724", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.117 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.117 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" iface="eth0" netns="" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.117 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.117 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.145 [INFO][5508] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.145 [INFO][5508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.145 [INFO][5508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.153 [WARNING][5508] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.153 [INFO][5508] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.155 [INFO][5508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:23.159801 containerd[1626]: 2025-11-01 01:59:23.157 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.160442 containerd[1626]: time="2025-11-01T01:59:23.159857191Z" level=info msg="TearDown network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" successfully" Nov 1 01:59:23.160442 containerd[1626]: time="2025-11-01T01:59:23.159899073Z" level=info msg="StopPodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" returns successfully" Nov 1 01:59:23.160942 containerd[1626]: time="2025-11-01T01:59:23.160910792Z" level=info msg="RemovePodSandbox for \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\"" Nov 1 01:59:23.160982 containerd[1626]: time="2025-11-01T01:59:23.160960152Z" level=info msg="Forcibly stopping sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\"" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.210 [WARNING][5522] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69be427e-9188-4acc-abfe-94d74b48ccf9", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"4e263748cf69cf4b05dd7071012e9c2fca6cbc875b6b682cecabd27ef0cf2985", Pod:"coredns-668d6bf9bc-kmvw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5b791ab724", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:23.254926 containerd[1626]: 
2025-11-01 01:59:23.210 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.210 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" iface="eth0" netns="" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.210 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.210 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.238 [INFO][5529] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.238 [INFO][5529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.238 [INFO][5529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.245 [WARNING][5529] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.246 [INFO][5529] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" HandleID="k8s-pod-network.59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Workload="srv--gnbw4.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kmvw8-eth0" Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.248 [INFO][5529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:23.254926 containerd[1626]: 2025-11-01 01:59:23.250 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff" Nov 1 01:59:23.254926 containerd[1626]: time="2025-11-01T01:59:23.253334996Z" level=info msg="TearDown network for sandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" successfully" Nov 1 01:59:23.256576 containerd[1626]: time="2025-11-01T01:59:23.256534419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:23.256814 containerd[1626]: time="2025-11-01T01:59:23.256750276Z" level=info msg="RemovePodSandbox \"59c8aa28fe58eb7f4e11683604b4c84885b1a91ac96437eeced6b9dd251fc4ff\" returns successfully" Nov 1 01:59:23.258102 containerd[1626]: time="2025-11-01T01:59:23.258023581Z" level=info msg="StopPodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\"" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.306 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4caf741f-c22d-4e76-9e9d-18f81ca6bba2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0", Pod:"csi-node-driver-b5qvt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22fa1a6a2df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.307 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.307 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" iface="eth0" netns="" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.307 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.307 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.335 [INFO][5551] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.335 [INFO][5551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.335 [INFO][5551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.342 [WARNING][5551] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.343 [INFO][5551] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.344 [INFO][5551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:23.348593 containerd[1626]: 2025-11-01 01:59:23.346 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.349392 containerd[1626]: time="2025-11-01T01:59:23.348626335Z" level=info msg="TearDown network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" successfully" Nov 1 01:59:23.349392 containerd[1626]: time="2025-11-01T01:59:23.348659085Z" level=info msg="StopPodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" returns successfully" Nov 1 01:59:23.349659 containerd[1626]: time="2025-11-01T01:59:23.349638963Z" level=info msg="RemovePodSandbox for \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\"" Nov 1 01:59:23.349701 containerd[1626]: time="2025-11-01T01:59:23.349671625Z" level=info msg="Forcibly stopping sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\"" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.397 [WARNING][5565] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4caf741f-c22d-4e76-9e9d-18f81ca6bba2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gnbw4.gb1.brightbox.com", ContainerID:"b66a70c817e13c398f807547fec4b355b7ab0e36c69f58977e72fec8c66c4bc0", Pod:"csi-node-driver-b5qvt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22fa1a6a2df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.397 [INFO][5565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.397 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" iface="eth0" netns="" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.397 [INFO][5565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.397 [INFO][5565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.427 [INFO][5572] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.427 [INFO][5572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.427 [INFO][5572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.435 [WARNING][5572] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.435 [INFO][5572] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" HandleID="k8s-pod-network.b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Workload="srv--gnbw4.gb1.brightbox.com-k8s-csi--node--driver--b5qvt-eth0" Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.436 [INFO][5572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:59:23.440605 containerd[1626]: 2025-11-01 01:59:23.438 [INFO][5565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35" Nov 1 01:59:23.441169 containerd[1626]: time="2025-11-01T01:59:23.440664155Z" level=info msg="TearDown network for sandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" successfully" Nov 1 01:59:23.443792 containerd[1626]: time="2025-11-01T01:59:23.443752106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:59:23.443870 containerd[1626]: time="2025-11-01T01:59:23.443836679Z" level=info msg="RemovePodSandbox \"b05aff0f93d640650a21db2fb9c080c697f5249f19ca69f540a677174973fa35\" returns successfully" Nov 1 01:59:24.738447 containerd[1626]: time="2025-11-01T01:59:24.738284807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:59:25.059367 containerd[1626]: time="2025-11-01T01:59:25.059108926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:25.060580 containerd[1626]: time="2025-11-01T01:59:25.060510172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:59:25.060732 containerd[1626]: time="2025-11-01T01:59:25.060556743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:59:25.061172 kubelet[2853]: E1101 01:59:25.060910 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:25.061172 kubelet[2853]: E1101 01:59:25.060996 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:25.061971 kubelet[2853]: E1101 01:59:25.061777 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdj45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d498fd-4kbzk_calico-system(fdce623b-f498-4a86-b9d7-a71f9568f87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:25.063347 kubelet[2853]: E1101 01:59:25.063258 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:59:29.744679 kubelet[2853]: E1101 01:59:29.744562 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 01:59:33.739849 kubelet[2853]: E1101 01:59:33.739655 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:34.738869 kubelet[2853]: E1101 01:59:34.738165 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 01:59:34.750410 kubelet[2853]: E1101 01:59:34.749853 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:59:37.740223 kubelet[2853]: E1101 01:59:37.739993 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:59:39.741218 kubelet[2853]: E1101 01:59:39.740611 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:59:43.740343 containerd[1626]: time="2025-11-01T01:59:43.740295769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:59:44.057913 containerd[1626]: time="2025-11-01T01:59:44.057679230Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:44.059801 containerd[1626]: time="2025-11-01T01:59:44.059484912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:59:44.059801 containerd[1626]: time="2025-11-01T01:59:44.059695724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:59:44.060874 kubelet[2853]: E1101 01:59:44.060489 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:44.060874 kubelet[2853]: E1101 01:59:44.060655 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:44.064103 kubelet[2853]: E1101 01:59:44.063909 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9caf0efd36747098a83dd07c388322c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:44.067602 containerd[1626]: time="2025-11-01T01:59:44.066758266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:59:44.394094 containerd[1626]: time="2025-11-01T01:59:44.393889924Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:44.394856 containerd[1626]: time="2025-11-01T01:59:44.394757232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:59:44.394993 containerd[1626]: time="2025-11-01T01:59:44.394954183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:59:44.395583 kubelet[2853]: E1101 01:59:44.395289 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:44.395583 kubelet[2853]: E1101 01:59:44.395359 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 
01:59:44.395583 kubelet[2853]: E1101 01:59:44.395529 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:44.398618 kubelet[2853]: E1101 01:59:44.397683 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 01:59:45.740917 containerd[1626]: time="2025-11-01T01:59:45.740879492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:59:46.071651 containerd[1626]: time="2025-11-01T01:59:46.070813622Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:46.072239 containerd[1626]: time="2025-11-01T01:59:46.072133671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:59:46.072372 containerd[1626]: time="2025-11-01T01:59:46.072301272Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:59:46.072758 kubelet[2853]: E1101 01:59:46.072590 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:46.072758 kubelet[2853]: E1101 01:59:46.072682 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:46.074232 kubelet[2853]: E1101 01:59:46.072924 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:46.078980 containerd[1626]: time="2025-11-01T01:59:46.078842437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:59:46.406789 containerd[1626]: time="2025-11-01T01:59:46.406696930Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:46.408467 containerd[1626]: time="2025-11-01T01:59:46.408385356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:59:46.408642 containerd[1626]: time="2025-11-01T01:59:46.408574645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:59:46.408972 kubelet[2853]: E1101 01:59:46.408901 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:46.409183 kubelet[2853]: E1101 01:59:46.409003 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:46.412056 kubelet[2853]: E1101 
01:59:46.411742 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:46.413916 kubelet[2853]: E1101 01:59:46.413804 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:59:46.743687 containerd[1626]: time="2025-11-01T01:59:46.743297774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:59:47.052655 containerd[1626]: time="2025-11-01T01:59:47.052251090Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:47.053762 containerd[1626]: time="2025-11-01T01:59:47.053607007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:59:47.053762 
containerd[1626]: time="2025-11-01T01:59:47.053690751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:47.054021 kubelet[2853]: E1101 01:59:47.053958 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:47.054123 kubelet[2853]: E1101 01:59:47.054039 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:47.056447 kubelet[2853]: E1101 01:59:47.056353 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27q6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qxr6w_calico-system(7d2ab813-9398-4622-9019-515028818713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:47.057594 kubelet[2853]: E1101 01:59:47.057558 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 01:59:48.737786 containerd[1626]: time="2025-11-01T01:59:48.737679541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:49.072434 containerd[1626]: time="2025-11-01T01:59:49.072280992Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:59:49.073362 containerd[1626]: time="2025-11-01T01:59:49.073293561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:49.073465 containerd[1626]: time="2025-11-01T01:59:49.073315332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:49.075414 kubelet[2853]: E1101 01:59:49.075346 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:49.075828 kubelet[2853]: E1101 01:59:49.075429 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:49.075931 kubelet[2853]: E1101 01:59:49.075774 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8xmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-8r8qj_calico-apiserver(53035908-eec7-4eef-b118-526472e0fe2d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:49.078321 kubelet[2853]: E1101 01:59:49.077068 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:50.740504 containerd[1626]: time="2025-11-01T01:59:50.740374975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:51.063768 containerd[1626]: time="2025-11-01T01:59:51.063281565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:51.064562 containerd[1626]: time="2025-11-01T01:59:51.064166238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:51.064562 containerd[1626]: time="2025-11-01T01:59:51.064278865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:59:51.064678 kubelet[2853]: E1101 01:59:51.064473 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:51.064678 kubelet[2853]: E1101 01:59:51.064535 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:51.065562 kubelet[2853]: E1101 01:59:51.064701 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr874,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-tnvl5_calico-apiserver(ff93fa77-947d-41bd-9b0a-6912cba460eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:51.067168 kubelet[2853]: E1101 01:59:51.066206 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 01:59:51.371531 systemd[1]: Started sshd@7-10.244.90.154:22-147.75.109.163:54142.service - OpenSSH per-connection server daemon (147.75.109.163:54142). 
Nov 1 01:59:52.338192 sshd[5622]: Accepted publickey for core from 147.75.109.163 port 54142 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:59:52.342519 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:59:52.378644 systemd-logind[1594]: New session 10 of user core.
Nov 1 01:59:52.384803 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 01:59:53.740346 containerd[1626]: time="2025-11-01T01:59:53.738629285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 01:59:53.778048 sshd[5622]: pam_unix(sshd:session): session closed for user core
Nov 1 01:59:53.790287 systemd[1]: sshd@7-10.244.90.154:22-147.75.109.163:54142.service: Deactivated successfully.
Nov 1 01:59:53.797510 systemd-logind[1594]: Session 10 logged out. Waiting for processes to exit.
Nov 1 01:59:53.805682 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 01:59:53.810815 systemd-logind[1594]: Removed session 10.
Nov 1 01:59:54.085117 containerd[1626]: time="2025-11-01T01:59:54.084013018Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:54.087280 containerd[1626]: time="2025-11-01T01:59:54.085898579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:59:54.087280 containerd[1626]: time="2025-11-01T01:59:54.086017821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:59:54.089607 kubelet[2853]: E1101 01:59:54.087290 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:54.089607 kubelet[2853]: E1101 01:59:54.087404 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:54.089607 kubelet[2853]: E1101 01:59:54.087726 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdj45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d498fd-4kbzk_calico-system(fdce623b-f498-4a86-b9d7-a71f9568f87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:54.089607 kubelet[2853]: E1101 01:59:54.089314 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 01:59:55.741003 kubelet[2853]: E1101 01:59:55.740873 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 01:59:58.740740 kubelet[2853]: E1101 01:59:58.740679 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 01:59:58.930446 systemd[1]: Started 
sshd@8-10.244.90.154:22-147.75.109.163:54158.service - OpenSSH per-connection server daemon (147.75.109.163:54158). Nov 1 01:59:59.738964 kubelet[2853]: E1101 01:59:59.738891 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 01:59:59.872866 sshd[5641]: Accepted publickey for core from 147.75.109.163 port 54158 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 01:59:59.874978 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:59:59.883083 systemd-logind[1594]: New session 11 of user core. Nov 1 01:59:59.890401 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 02:00:00.722623 sshd[5641]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:00.730354 systemd-logind[1594]: Session 11 logged out. Waiting for processes to exit. Nov 1 02:00:00.731277 systemd[1]: sshd@8-10.244.90.154:22-147.75.109.163:54158.service: Deactivated successfully. Nov 1 02:00:00.738910 systemd[1]: session-11.scope: Deactivated successfully. 
Nov 1 02:00:00.741592 kubelet[2853]: E1101 02:00:00.740935 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 02:00:00.744205 systemd-logind[1594]: Removed session 11. Nov 1 02:00:04.740205 kubelet[2853]: E1101 02:00:04.738319 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 02:00:05.742984 kubelet[2853]: E1101 02:00:05.742261 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 
02:00:05.875491 systemd[1]: Started sshd@9-10.244.90.154:22-147.75.109.163:50550.service - OpenSSH per-connection server daemon (147.75.109.163:50550). Nov 1 02:00:06.807526 sshd[5678]: Accepted publickey for core from 147.75.109.163 port 50550 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:06.810735 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:06.819484 systemd-logind[1594]: New session 12 of user core. Nov 1 02:00:06.826559 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 02:00:07.634488 sshd[5678]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:07.647415 systemd[1]: sshd@9-10.244.90.154:22-147.75.109.163:50550.service: Deactivated successfully. Nov 1 02:00:07.652329 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 02:00:07.653531 systemd-logind[1594]: Session 12 logged out. Waiting for processes to exit. Nov 1 02:00:07.655318 systemd-logind[1594]: Removed session 12. Nov 1 02:00:07.791270 systemd[1]: Started sshd@10-10.244.90.154:22-147.75.109.163:50560.service - OpenSSH per-connection server daemon (147.75.109.163:50560). Nov 1 02:00:08.705434 sshd[5693]: Accepted publickey for core from 147.75.109.163 port 50560 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:08.709972 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:08.721332 systemd-logind[1594]: New session 13 of user core. Nov 1 02:00:08.726515 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 02:00:09.586977 sshd[5693]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:09.595327 systemd[1]: sshd@10-10.244.90.154:22-147.75.109.163:50560.service: Deactivated successfully. Nov 1 02:00:09.602481 systemd-logind[1594]: Session 13 logged out. Waiting for processes to exit. Nov 1 02:00:09.604237 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 1 02:00:09.607693 systemd-logind[1594]: Removed session 13. Nov 1 02:00:09.745603 kubelet[2853]: E1101 02:00:09.745538 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 02:00:09.746453 systemd[1]: Started sshd@11-10.244.90.154:22-147.75.109.163:50562.service - OpenSSH per-connection server daemon (147.75.109.163:50562). Nov 1 02:00:10.685469 sshd[5705]: Accepted publickey for core from 147.75.109.163 port 50562 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:10.694889 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:10.705340 systemd-logind[1594]: New session 14 of user core. Nov 1 02:00:10.711472 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 02:00:10.763921 kubelet[2853]: E1101 02:00:10.763864 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 02:00:10.765398 kubelet[2853]: E1101 02:00:10.763946 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 02:00:11.453240 sshd[5705]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:11.468586 systemd[1]: sshd@11-10.244.90.154:22-147.75.109.163:50562.service: Deactivated successfully. Nov 1 02:00:11.479395 systemd[1]: session-14.scope: Deactivated successfully. 
Nov 1 02:00:11.482400 systemd-logind[1594]: Session 14 logged out. Waiting for processes to exit. Nov 1 02:00:11.484867 systemd-logind[1594]: Removed session 14. Nov 1 02:00:15.737854 kubelet[2853]: E1101 02:00:15.737511 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 02:00:16.610223 systemd[1]: Started sshd@12-10.244.90.154:22-147.75.109.163:36396.service - OpenSSH per-connection server daemon (147.75.109.163:36396). Nov 1 02:00:17.534735 sshd[5719]: Accepted publickey for core from 147.75.109.163 port 36396 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:17.536802 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:17.543340 systemd-logind[1594]: New session 15 of user core. Nov 1 02:00:17.548401 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 1 02:00:17.741574 kubelet[2853]: E1101 02:00:17.738558 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 02:00:18.386012 sshd[5719]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:18.398754 systemd[1]: sshd@12-10.244.90.154:22-147.75.109.163:36396.service: Deactivated successfully. Nov 1 02:00:18.411376 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 02:00:18.412531 systemd-logind[1594]: Session 15 logged out. Waiting for processes to exit. Nov 1 02:00:18.414520 systemd-logind[1594]: Removed session 15. 
Nov 1 02:00:19.742604 kubelet[2853]: E1101 02:00:19.742211 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 02:00:22.742267 kubelet[2853]: E1101 02:00:22.742182 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 02:00:23.542438 systemd[1]: Started sshd@13-10.244.90.154:22-147.75.109.163:56948.service - OpenSSH per-connection server daemon (147.75.109.163:56948). 
Nov 1 02:00:23.746173 kubelet[2853]: E1101 02:00:23.742421 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 02:00:24.503174 sshd[5741]: Accepted publickey for core from 147.75.109.163 port 56948 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:24.505269 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:24.519650 systemd-logind[1594]: New session 16 of user core. Nov 1 02:00:24.526356 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 02:00:24.738980 kubelet[2853]: E1101 02:00:24.738772 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 02:00:25.402746 sshd[5741]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:25.421815 systemd[1]: sshd@13-10.244.90.154:22-147.75.109.163:56948.service: Deactivated successfully. Nov 1 02:00:25.422295 systemd-logind[1594]: Session 16 logged out. Waiting for processes to exit. Nov 1 02:00:25.429444 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 02:00:25.433832 systemd-logind[1594]: Removed session 16. 
Nov 1 02:00:26.740491 kubelet[2853]: E1101 02:00:26.739472 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 02:00:30.559629 systemd[1]: Started sshd@14-10.244.90.154:22-147.75.109.163:37332.service - OpenSSH per-connection server daemon (147.75.109.163:37332). Nov 1 02:00:31.494300 sshd[5757]: Accepted publickey for core from 147.75.109.163 port 37332 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:31.497057 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:31.510106 systemd-logind[1594]: New session 17 of user core. Nov 1 02:00:31.520414 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 02:00:31.741561 containerd[1626]: time="2025-11-01T02:00:31.740622238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:00:32.259842 sshd[5757]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:32.271178 systemd-logind[1594]: Session 17 logged out. Waiting for processes to exit. Nov 1 02:00:32.272875 systemd[1]: sshd@14-10.244.90.154:22-147.75.109.163:37332.service: Deactivated successfully. Nov 1 02:00:32.282949 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 02:00:32.284628 systemd-logind[1594]: Removed session 17. 
Nov 1 02:00:32.369807 containerd[1626]: time="2025-11-01T02:00:32.369046665Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:32.373157 containerd[1626]: time="2025-11-01T02:00:32.371859554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:00:32.373157 containerd[1626]: time="2025-11-01T02:00:32.371909730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:00:32.373579 kubelet[2853]: E1101 02:00:32.373518 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:32.374022 kubelet[2853]: E1101 02:00:32.373612 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:32.374022 kubelet[2853]: E1101 02:00:32.373841 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr874,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-tnvl5_calico-apiserver(ff93fa77-947d-41bd-9b0a-6912cba460eb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:32.376158 kubelet[2853]: E1101 02:00:32.375043 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 02:00:32.418755 systemd[1]: Started sshd@15-10.244.90.154:22-147.75.109.163:37342.service - OpenSSH per-connection server daemon (147.75.109.163:37342). Nov 1 02:00:33.378745 sshd[5779]: Accepted publickey for core from 147.75.109.163 port 37342 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:33.381285 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:33.392559 systemd-logind[1594]: New session 18 of user core. Nov 1 02:00:33.396469 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 1 02:00:33.739405 kubelet[2853]: E1101 02:00:33.738290 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d" Nov 1 02:00:34.545657 sshd[5779]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:34.556199 systemd-logind[1594]: Session 18 logged out. Waiting for processes to exit. Nov 1 02:00:34.556577 systemd[1]: sshd@15-10.244.90.154:22-147.75.109.163:37342.service: Deactivated successfully. Nov 1 02:00:34.559565 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 02:00:34.560409 systemd-logind[1594]: Removed session 18. Nov 1 02:00:34.692690 systemd[1]: Started sshd@16-10.244.90.154:22-147.75.109.163:37348.service - OpenSSH per-connection server daemon (147.75.109.163:37348). Nov 1 02:00:35.638519 sshd[5791]: Accepted publickey for core from 147.75.109.163 port 37348 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:35.646736 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:35.657681 systemd-logind[1594]: New session 19 of user core. Nov 1 02:00:35.663602 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 1 02:00:35.773788 containerd[1626]: time="2025-11-01T02:00:35.742382908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 02:00:36.128408 containerd[1626]: time="2025-11-01T02:00:36.128303686Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:36.130594 containerd[1626]: time="2025-11-01T02:00:36.129473331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 02:00:36.130594 containerd[1626]: time="2025-11-01T02:00:36.129577773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 02:00:36.136796 kubelet[2853]: E1101 02:00:36.130800 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:00:36.136796 kubelet[2853]: E1101 02:00:36.131247 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:00:36.136796 kubelet[2853]: E1101 02:00:36.131811 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9caf0efd36747098a83dd07c388322c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:36.139855 containerd[1626]: time="2025-11-01T02:00:36.136817890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
02:00:36.473907 containerd[1626]: time="2025-11-01T02:00:36.473370379Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:36.475453 containerd[1626]: time="2025-11-01T02:00:36.475108170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 02:00:36.475453 containerd[1626]: time="2025-11-01T02:00:36.475317356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 02:00:36.476767 kubelet[2853]: E1101 02:00:36.476271 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:00:36.476767 kubelet[2853]: E1101 02:00:36.476385 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:00:36.476767 kubelet[2853]: E1101 02:00:36.476632 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgh4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcc756c94-8k58z_calico-system(62809712-0e36-4839-9d03-798eca9b1c78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:36.479401 kubelet[2853]: E1101 02:00:36.479315 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 02:00:36.749422 containerd[1626]: time="2025-11-01T02:00:36.749149589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:00:37.106241 containerd[1626]: time="2025-11-01T02:00:37.105648861Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:37.110343 containerd[1626]: time="2025-11-01T02:00:37.107635998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:00:37.110343 containerd[1626]: time="2025-11-01T02:00:37.109119071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 
02:00:37.110547 kubelet[2853]: E1101 02:00:37.109340 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:37.110547 kubelet[2853]: E1101 02:00:37.109400 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:37.110547 kubelet[2853]: E1101 02:00:37.109559 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8xmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7589849df-8r8qj_calico-apiserver(53035908-eec7-4eef-b118-526472e0fe2d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:37.111359 kubelet[2853]: E1101 02:00:37.111316 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d" Nov 1 02:00:37.397755 sshd[5791]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:37.413070 systemd[1]: sshd@16-10.244.90.154:22-147.75.109.163:37348.service: Deactivated successfully. Nov 1 02:00:37.420469 systemd-logind[1594]: Session 19 logged out. Waiting for processes to exit. Nov 1 02:00:37.421021 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 02:00:37.423266 systemd-logind[1594]: Removed session 19. Nov 1 02:00:37.563387 systemd[1]: Started sshd@17-10.244.90.154:22-147.75.109.163:37362.service - OpenSSH per-connection server daemon (147.75.109.163:37362). 
Nov 1 02:00:37.740748 containerd[1626]: time="2025-11-01T02:00:37.740577748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 02:00:38.102871 containerd[1626]: time="2025-11-01T02:00:38.102683070Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:38.104533 containerd[1626]: time="2025-11-01T02:00:38.103643520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 02:00:38.104533 containerd[1626]: time="2025-11-01T02:00:38.103691992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 02:00:38.105375 kubelet[2853]: E1101 02:00:38.103946 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:00:38.105375 kubelet[2853]: E1101 02:00:38.104024 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:00:38.105375 kubelet[2853]: E1101 02:00:38.104399 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27q6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qxr6w_calico-system(7d2ab813-9398-4622-9019-515028818713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:38.107764 containerd[1626]: time="2025-11-01T02:00:38.107380179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:00:38.109260 kubelet[2853]: E1101 02:00:38.107817 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713" Nov 1 02:00:38.444240 containerd[1626]: time="2025-11-01T02:00:38.444180965Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 02:00:38.448163 containerd[1626]: time="2025-11-01T02:00:38.446473711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:00:38.448163 containerd[1626]: time="2025-11-01T02:00:38.446582962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 02:00:38.448977 kubelet[2853]: E1101 02:00:38.448482 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:00:38.448977 kubelet[2853]: E1101 02:00:38.448557 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:00:38.448977 kubelet[2853]: E1101 02:00:38.448731 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:38.452187 containerd[1626]: time="2025-11-01T02:00:38.450943901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:00:38.494175 sshd[5832]: Accepted publickey for core from 147.75.109.163 port 37362 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:38.498876 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:38.517906 systemd-logind[1594]: New session 20 of user core. Nov 1 02:00:38.524449 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 02:00:38.791744 containerd[1626]: time="2025-11-01T02:00:38.791555899Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:38.794158 containerd[1626]: time="2025-11-01T02:00:38.793497775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:00:38.794158 containerd[1626]: time="2025-11-01T02:00:38.793541814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 02:00:38.794801 kubelet[2853]: E1101 02:00:38.794724 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:00:38.794916 kubelet[2853]: E1101 02:00:38.794844 2853 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:00:38.795560 kubelet[2853]: E1101 02:00:38.795116 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j25c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b5qvt_calico-system(4caf741f-c22d-4e76-9e9d-18f81ca6bba2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:38.797974 kubelet[2853]: E1101 02:00:38.797694 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2" Nov 1 02:00:39.777578 sshd[5832]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:39.787305 systemd[1]: sshd@17-10.244.90.154:22-147.75.109.163:37362.service: Deactivated successfully. Nov 1 02:00:39.791792 systemd-logind[1594]: Session 20 logged out. Waiting for processes to exit. 
Nov 1 02:00:39.792655 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 02:00:39.797572 systemd-logind[1594]: Removed session 20. Nov 1 02:00:39.952581 systemd[1]: Started sshd@18-10.244.90.154:22-147.75.109.163:37372.service - OpenSSH per-connection server daemon (147.75.109.163:37372). Nov 1 02:00:40.860714 sshd[5844]: Accepted publickey for core from 147.75.109.163 port 37372 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:40.865542 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:40.880014 systemd-logind[1594]: New session 21 of user core. Nov 1 02:00:40.885483 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 02:00:41.676422 sshd[5844]: pam_unix(sshd:session): session closed for user core Nov 1 02:00:41.685737 systemd[1]: sshd@18-10.244.90.154:22-147.75.109.163:37372.service: Deactivated successfully. Nov 1 02:00:41.695684 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 02:00:41.701592 systemd-logind[1594]: Session 21 logged out. Waiting for processes to exit. Nov 1 02:00:41.705659 systemd-logind[1594]: Removed session 21. Nov 1 02:00:43.742189 kubelet[2853]: E1101 02:00:43.741586 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb" Nov 1 02:00:46.832345 systemd[1]: Started sshd@19-10.244.90.154:22-147.75.109.163:58944.service - OpenSSH per-connection server daemon (147.75.109.163:58944). 
Nov 1 02:00:47.743737 containerd[1626]: time="2025-11-01T02:00:47.743062675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 02:00:47.746638 kubelet[2853]: E1101 02:00:47.746051 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78" Nov 1 02:00:47.810476 sshd[5880]: Accepted publickey for core from 147.75.109.163 port 58944 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:00:47.810015 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:00:47.824715 systemd-logind[1594]: New session 22 of user core. Nov 1 02:00:47.830472 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 1 02:00:48.100674 containerd[1626]: time="2025-11-01T02:00:48.100452135Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:48.101948 containerd[1626]: time="2025-11-01T02:00:48.101605620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 02:00:48.101948 containerd[1626]: time="2025-11-01T02:00:48.101620632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 02:00:48.102749 kubelet[2853]: E1101 02:00:48.101989 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:00:48.102749 kubelet[2853]: E1101 02:00:48.102068 2853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:00:48.102749 kubelet[2853]: E1101 02:00:48.102324 2853 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdj45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d498fd-4kbzk_calico-system(fdce623b-f498-4a86-b9d7-a71f9568f87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 02:00:48.103993 kubelet[2853]: E1101 02:00:48.103777 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d"
Nov 1 02:00:48.595882 sshd[5880]: pam_unix(sshd:session): session closed for user core
Nov 1 02:00:48.601034 systemd-logind[1594]: Session 22 logged out. Waiting for processes to exit.
Nov 1 02:00:48.602150 systemd[1]: sshd@19-10.244.90.154:22-147.75.109.163:58944.service: Deactivated successfully.
Nov 1 02:00:48.616100 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 02:00:48.618828 systemd-logind[1594]: Removed session 22.
Nov 1 02:00:51.741244 kubelet[2853]: E1101 02:00:51.740788 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d"
Nov 1 02:00:51.751862 kubelet[2853]: E1101 02:00:51.751805 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2"
Nov 1 02:00:53.740359 kubelet[2853]: E1101 02:00:53.739337 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713"
Nov 1 02:00:53.762523 systemd[1]: Started sshd@20-10.244.90.154:22-147.75.109.163:50028.service - OpenSSH per-connection server daemon (147.75.109.163:50028).
Nov 1 02:00:54.729194 sshd[5894]: Accepted publickey for core from 147.75.109.163 port 50028 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:00:54.740761 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:00:54.752218 systemd-logind[1594]: New session 23 of user core.
Nov 1 02:00:54.760182 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 02:00:55.661953 sshd[5894]: pam_unix(sshd:session): session closed for user core
Nov 1 02:00:55.670557 systemd-logind[1594]: Session 23 logged out. Waiting for processes to exit.
Nov 1 02:00:55.672370 systemd[1]: sshd@20-10.244.90.154:22-147.75.109.163:50028.service: Deactivated successfully.
Nov 1 02:00:55.688576 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 02:00:55.690446 systemd-logind[1594]: Removed session 23.
Nov 1 02:00:57.754924 kubelet[2853]: E1101 02:00:57.754464 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-tnvl5" podUID="ff93fa77-947d-41bd-9b0a-6912cba460eb"
Nov 1 02:01:00.817420 systemd[1]: Started sshd@21-10.244.90.154:22-147.75.109.163:59108.service - OpenSSH per-connection server daemon (147.75.109.163:59108).
Nov 1 02:01:01.740920 kubelet[2853]: E1101 02:01:01.740586 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d498fd-4kbzk" podUID="fdce623b-f498-4a86-b9d7-a71f9568f87d"
Nov 1 02:01:01.744290 kubelet[2853]: E1101 02:01:01.744232 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcc756c94-8k58z" podUID="62809712-0e36-4839-9d03-798eca9b1c78"
Nov 1 02:01:01.764431 sshd[5911]: Accepted publickey for core from 147.75.109.163 port 59108 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:01:01.773924 sshd[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:01:01.793497 systemd-logind[1594]: New session 24 of user core.
Nov 1 02:01:01.799000 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 02:01:02.618437 sshd[5911]: pam_unix(sshd:session): session closed for user core
Nov 1 02:01:02.630447 systemd[1]: sshd@21-10.244.90.154:22-147.75.109.163:59108.service: Deactivated successfully.
Nov 1 02:01:02.640632 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 02:01:02.642406 systemd-logind[1594]: Session 24 logged out. Waiting for processes to exit.
Nov 1 02:01:02.643376 systemd-logind[1594]: Removed session 24.
Nov 1 02:01:02.737844 kubelet[2853]: E1101 02:01:02.737782 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7589849df-8r8qj" podUID="53035908-eec7-4eef-b118-526472e0fe2d"
Nov 1 02:01:05.255172 systemd[1]: run-containerd-runc-k8s.io-823c94cc33e0765c881888e9aca563be56d4f5bfe2eae679594ab5f5fd74f123-runc.HMhhWB.mount: Deactivated successfully.
Nov 1 02:01:05.741684 kubelet[2853]: E1101 02:01:05.741467 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qxr6w" podUID="7d2ab813-9398-4622-9019-515028818713"
Nov 1 02:01:06.739798 kubelet[2853]: E1101 02:01:06.739633 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b5qvt" podUID="4caf741f-c22d-4e76-9e9d-18f81ca6bba2"