Sep 9 06:57:00.976748 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 06:57:00.976785 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 06:57:00.976803 kernel: BIOS-provided physical RAM map:
Sep 9 06:57:00.976813 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 06:57:00.976822 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 06:57:00.976832 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 06:57:00.976843 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 9 06:57:00.976862 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 9 06:57:00.976873 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 06:57:00.976906 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 9 06:57:00.976928 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 06:57:00.976939 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 06:57:00.976949 kernel: NX (Execute Disable) protection: active
Sep 9 06:57:00.976979 kernel: APIC: Static calls initialized
Sep 9 06:57:00.976991 kernel: SMBIOS 2.8 present.
Sep 9 06:57:00.977003 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 9 06:57:00.977027 kernel: DMI: Memory slots populated: 1/1
Sep 9 06:57:00.977038 kernel: Hypervisor detected: KVM
Sep 9 06:57:00.977049 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 06:57:00.977060 kernel: kvm-clock: using sched offset of 6970197641 cycles
Sep 9 06:57:00.977071 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 06:57:00.977082 kernel: tsc: Detected 2799.998 MHz processor
Sep 9 06:57:00.977093 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 06:57:00.977104 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 06:57:00.977115 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 9 06:57:00.977131 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 06:57:00.977142 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 06:57:00.977153 kernel: Using GB pages for direct mapping
Sep 9 06:57:00.977164 kernel: ACPI: Early table checksum verification disabled
Sep 9 06:57:00.977174 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 9 06:57:00.977185 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977196 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977207 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977217 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 9 06:57:00.977233 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977243 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977254 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977265 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 06:57:00.977276 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 9 06:57:00.977287 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 9 06:57:00.977303 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 9 06:57:00.977318 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 9 06:57:00.977330 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 9 06:57:00.977341 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 9 06:57:00.977352 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 9 06:57:00.977363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 06:57:00.977375 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 9 06:57:00.977386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 9 06:57:00.977436 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Sep 9 06:57:00.977450 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Sep 9 06:57:00.977462 kernel: Zone ranges:
Sep 9 06:57:00.977473 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 06:57:00.977484 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 9 06:57:00.977526 kernel: Normal empty
Sep 9 06:57:00.977543 kernel: Device empty
Sep 9 06:57:00.977554 kernel: Movable zone start for each node
Sep 9 06:57:00.977566 kernel: Early memory node ranges
Sep 9 06:57:00.977577 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 06:57:00.977594 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 9 06:57:00.977606 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 9 06:57:00.977617 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 06:57:00.977628 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 06:57:00.977639 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 9 06:57:00.977657 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 06:57:00.977669 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 06:57:00.977684 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 06:57:00.977696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 06:57:00.977713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 06:57:00.977724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 06:57:00.977742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 06:57:00.977753 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 06:57:00.977764 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 06:57:00.977775 kernel: TSC deadline timer available
Sep 9 06:57:00.977786 kernel: CPU topo: Max. logical packages: 16
Sep 9 06:57:00.977797 kernel: CPU topo: Max. logical dies: 16
Sep 9 06:57:00.977808 kernel: CPU topo: Max. dies per package: 1
Sep 9 06:57:00.977824 kernel: CPU topo: Max. threads per core: 1
Sep 9 06:57:00.977835 kernel: CPU topo: Num. cores per package: 1
Sep 9 06:57:00.977847 kernel: CPU topo: Num. threads per package: 1
Sep 9 06:57:00.977858 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Sep 9 06:57:00.977869 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 06:57:00.977880 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 9 06:57:00.977902 kernel: Booting paravirtualized kernel on KVM
Sep 9 06:57:00.977913 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 06:57:00.977925 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 9 06:57:00.977942 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Sep 9 06:57:00.977965 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Sep 9 06:57:00.977978 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 9 06:57:00.977989 kernel: kvm-guest: PV spinlocks enabled
Sep 9 06:57:00.978000 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 06:57:00.978037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 06:57:00.978050 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 06:57:00.978061 kernel: random: crng init done
Sep 9 06:57:00.978079 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 06:57:00.978090 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 06:57:00.978101 kernel: Fallback order for Node 0: 0
Sep 9 06:57:00.978113 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Sep 9 06:57:00.978124 kernel: Policy zone: DMA32
Sep 9 06:57:00.978135 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 06:57:00.978146 kernel: software IO TLB: area num 16.
Sep 9 06:57:00.978158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 9 06:57:00.978169 kernel: Kernel/User page tables isolation: enabled
Sep 9 06:57:00.978209 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 06:57:00.978221 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 06:57:00.978232 kernel: Dynamic Preempt: voluntary
Sep 9 06:57:00.978269 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 06:57:00.978288 kernel: rcu: RCU event tracing is enabled.
Sep 9 06:57:00.978301 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 9 06:57:00.978313 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 06:57:00.978351 kernel: Rude variant of Tasks RCU enabled.
Sep 9 06:57:00.978366 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 06:57:00.978384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 06:57:00.978395 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 9 06:57:00.978407 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 9 06:57:00.978418 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 9 06:57:00.978430 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 9 06:57:00.978441 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 9 06:57:00.978452 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 06:57:00.978477 kernel: Console: colour VGA+ 80x25
Sep 9 06:57:00.978489 kernel: printk: legacy console [tty0] enabled
Sep 9 06:57:00.978506 kernel: printk: legacy console [ttyS0] enabled
Sep 9 06:57:00.978519 kernel: ACPI: Core revision 20240827
Sep 9 06:57:00.978531 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 06:57:00.978547 kernel: x2apic enabled
Sep 9 06:57:00.978559 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 06:57:00.978571 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Sep 9 06:57:00.978583 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Sep 9 06:57:00.978600 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 06:57:00.978612 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 9 06:57:00.978646 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 9 06:57:00.978659 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 06:57:00.978671 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 06:57:00.978682 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 06:57:00.978694 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 9 06:57:00.978706 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 06:57:00.978717 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 06:57:00.978729 kernel: MDS: Mitigation: Clear CPU buffers
Sep 9 06:57:00.978741 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 9 06:57:00.978758 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 9 06:57:00.978770 kernel: active return thunk: its_return_thunk
Sep 9 06:57:00.978782 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 06:57:00.978794 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 06:57:00.978805 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 06:57:00.978817 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 06:57:00.978828 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 06:57:00.978840 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 9 06:57:00.978852 kernel: Freeing SMP alternatives memory: 32K
Sep 9 06:57:00.978864 kernel: pid_max: default: 32768 minimum: 301
Sep 9 06:57:00.978875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 06:57:00.978901 kernel: landlock: Up and running.
Sep 9 06:57:00.978914 kernel: SELinux: Initializing.
Sep 9 06:57:00.978925 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 06:57:00.978937 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 06:57:00.978949 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 9 06:57:00.980073 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 9 06:57:00.980087 kernel: signal: max sigframe size: 1776
Sep 9 06:57:00.980105 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 06:57:00.980119 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 06:57:00.980131 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Sep 9 06:57:00.980150 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 06:57:00.980162 kernel: smp: Bringing up secondary CPUs ...
Sep 9 06:57:00.980174 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 06:57:00.980185 kernel: .... node #0, CPUs: #1
Sep 9 06:57:00.980197 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 06:57:00.980209 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Sep 9 06:57:00.980222 kernel: Memory: 1895688K/2096616K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 194920K reserved, 0K cma-reserved)
Sep 9 06:57:00.980234 kernel: devtmpfs: initialized
Sep 9 06:57:00.980246 kernel: x86/mm: Memory block size: 128MB
Sep 9 06:57:00.980262 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 06:57:00.980274 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 9 06:57:00.980286 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 06:57:00.980298 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 06:57:00.980309 kernel: audit: initializing netlink subsys (disabled)
Sep 9 06:57:00.980321 kernel: audit: type=2000 audit(1757401017.689:1): state=initialized audit_enabled=0 res=1
Sep 9 06:57:00.980333 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 06:57:00.980345 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 06:57:00.980357 kernel: cpuidle: using governor menu
Sep 9 06:57:00.980373 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 06:57:00.980384 kernel: dca service started, version 1.12.1
Sep 9 06:57:00.980396 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 9 06:57:00.980408 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 06:57:00.980420 kernel: PCI: Using configuration type 1 for base access
Sep 9 06:57:00.980432 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 06:57:00.980444 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 06:57:00.980456 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 06:57:00.980467 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 06:57:00.980484 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 06:57:00.980496 kernel: ACPI: Added _OSI(Module Device)
Sep 9 06:57:00.980508 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 06:57:00.980519 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 06:57:00.980531 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 06:57:00.980543 kernel: ACPI: Interpreter enabled
Sep 9 06:57:00.980555 kernel: ACPI: PM: (supports S0 S5)
Sep 9 06:57:00.980566 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 06:57:00.980578 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 06:57:00.980594 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 06:57:00.980606 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 06:57:00.980618 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 06:57:00.984140 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 06:57:00.984370 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 06:57:00.984539 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 06:57:00.984558 kernel: PCI host bridge to bus 0000:00
Sep 9 06:57:00.984755 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 06:57:00.984922 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 06:57:00.985107 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 06:57:00.985255 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 9 06:57:00.985403 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 06:57:00.985548 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 9 06:57:00.985694 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 06:57:00.985948 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 06:57:00.986232 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Sep 9 06:57:00.986399 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Sep 9 06:57:00.986559 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Sep 9 06:57:00.986719 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Sep 9 06:57:00.986927 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 06:57:00.987143 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.987316 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Sep 9 06:57:00.987477 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 9 06:57:00.987637 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 9 06:57:00.987819 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 9 06:57:00.992085 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.992266 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Sep 9 06:57:00.992443 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 9 06:57:00.992609 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 9 06:57:00.992773 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 9 06:57:00.993000 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.993170 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Sep 9 06:57:00.993332 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 9 06:57:00.993492 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 9 06:57:00.993661 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 9 06:57:00.993915 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.994138 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Sep 9 06:57:00.994453 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 9 06:57:00.994741 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 9 06:57:00.994918 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 9 06:57:00.997162 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.997345 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Sep 9 06:57:00.997510 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 9 06:57:00.997673 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 9 06:57:00.997865 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 9 06:57:00.998103 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.998290 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Sep 9 06:57:00.998453 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 9 06:57:00.998644 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 9 06:57:00.998821 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 9 06:57:00.999046 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:00.999212 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Sep 9 06:57:00.999374 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 9 06:57:00.999548 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 9 06:57:00.999709 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 9 06:57:01.000402 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 06:57:01.000576 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Sep 9 06:57:01.000738 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 9 06:57:01.000921 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 9 06:57:01.001114 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 9 06:57:01.001312 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 06:57:01.001477 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 9 06:57:01.001647 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Sep 9 06:57:01.001808 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 9 06:57:01.004999 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Sep 9 06:57:01.005234 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 06:57:01.005410 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Sep 9 06:57:01.005583 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Sep 9 06:57:01.005748 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 9 06:57:01.006011 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 06:57:01.006178 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 06:57:01.006370 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 06:57:01.006535 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Sep 9 06:57:01.006696 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Sep 9 06:57:01.007020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 06:57:01.007195 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 9 06:57:01.007401 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Sep 9 06:57:01.007571 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Sep 9 06:57:01.007738 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 9 06:57:01.007941 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 9 06:57:01.008128 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 9 06:57:01.008336 kernel: pci_bus 0000:02: extended config space not accessible
Sep 9 06:57:01.008536 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Sep 9 06:57:01.008709 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Sep 9 06:57:01.008875 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 9 06:57:01.009461 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Sep 9 06:57:01.009635 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Sep 9 06:57:01.009801 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 9 06:57:01.011059 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Sep 9 06:57:01.011245 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 9 06:57:01.011414 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 9 06:57:01.011581 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 9 06:57:01.011748 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 9 06:57:01.011928 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 9 06:57:01.012177 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 9 06:57:01.012358 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 9 06:57:01.012379 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 06:57:01.012391 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 06:57:01.012404 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 06:57:01.012416 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 06:57:01.012428 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 06:57:01.012441 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 06:57:01.012453 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 06:57:01.012465 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 06:57:01.012483 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 06:57:01.012495 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 06:57:01.012507 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 06:57:01.012519 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 06:57:01.012531 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 06:57:01.012543 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 06:57:01.012555 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 06:57:01.012567 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 06:57:01.012579 kernel: iommu: Default domain type: Translated
Sep 9 06:57:01.012596 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 06:57:01.012608 kernel: PCI: Using ACPI for IRQ routing
Sep 9 06:57:01.012620 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 06:57:01.012632 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 06:57:01.012644 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 9 06:57:01.012805 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 06:57:01.014040 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 06:57:01.014212 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 06:57:01.014239 kernel: vgaarb: loaded
Sep 9 06:57:01.014252 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 06:57:01.014264 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 06:57:01.014277 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 06:57:01.014289 kernel: pnp: PnP ACPI init
Sep 9 06:57:01.014521 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 06:57:01.014542 kernel: pnp: PnP ACPI: found 5 devices
Sep 9 06:57:01.014555 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 06:57:01.014574 kernel: NET: Registered PF_INET protocol family
Sep 9 06:57:01.014586 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 06:57:01.014598 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 9 06:57:01.014610 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 06:57:01.014623 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 06:57:01.014634 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 9 06:57:01.014646 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 9 06:57:01.014658 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 06:57:01.014670 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 06:57:01.014694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 06:57:01.014712 kernel: NET: Registered PF_XDP protocol family
Sep 9 06:57:01.014945 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 9 06:57:01.015160 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 9 06:57:01.015369 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 9 06:57:01.015637 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 9 06:57:01.015848 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 9 06:57:01.016041 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 9 06:57:01.016213 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 9 06:57:01.016382 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 9 06:57:01.016543 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Sep 9 06:57:01.016711 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Sep 9 06:57:01.018974 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Sep 9 06:57:01.019159 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Sep 9 06:57:01.019327 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Sep 9 06:57:01.019491 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Sep 9 06:57:01.019661 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Sep 9 06:57:01.019822 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Sep 9 06:57:01.020033 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 9 06:57:01.020248 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 9 06:57:01.020413 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 9 06:57:01.020607 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 9 06:57:01.020772 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 9 06:57:01.020947 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 9 06:57:01.021439 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 9 06:57:01.021614 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 9 06:57:01.021776 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 9 06:57:01.023981 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 9 06:57:01.024173 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 9 06:57:01.024341 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 9 06:57:01.024507 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 9 06:57:01.024669 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 9 06:57:01.024840 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 9 06:57:01.025069 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 9 06:57:01.025234 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 9 06:57:01.025397 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 9 06:57:01.025567 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 9 06:57:01.025728 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 9 06:57:01.025902 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 9 06:57:01.026130 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 9 06:57:01.026294 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 9 06:57:01.026454 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 9 06:57:01.026615 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 9 06:57:01.026775 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 9 06:57:01.026965 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 9 06:57:01.027136 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 9 06:57:01.027298 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 9 06:57:01.027480 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 9 06:57:01.027639 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 9 06:57:01.027799 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 9 06:57:01.030015 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 9 06:57:01.030203 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 9 06:57:01.030365 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 06:57:01.030516 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 06:57:01.030672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 06:57:01.030820 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 9 06:57:01.031007 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 06:57:01.031157 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 9 06:57:01.031325 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 9 06:57:01.031482 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 9 06:57:01.031635 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 9 06:57:01.031815 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 9 06:57:01.032015 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 9 06:57:01.032173 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 9 06:57:01.032326 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 9 06:57:01.032498 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 9 06:57:01.032650 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 9 06:57:01.032802 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 9 06:57:01.033696 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 9 06:57:01.033861 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 9 06:57:01.034059 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 9 06:57:01.034331 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 9 06:57:01.034490 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 9 06:57:01.034651 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 9 06:57:01.034814 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 9 06:57:01.035009 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 9 06:57:01.035166 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 9 06:57:01.035338 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 9 06:57:01.035493 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 9
06:57:01.035646 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 06:57:01.035839 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 9 06:57:01.036037 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 9 06:57:01.036192 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 06:57:01.036213 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 06:57:01.036226 kernel: PCI: CLS 0 bytes, default 64 Sep 9 06:57:01.036239 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 9 06:57:01.036252 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 9 06:57:01.036264 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 06:57:01.036277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 9 06:57:01.036290 kernel: Initialise system trusted keyrings Sep 9 06:57:01.036310 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 06:57:01.036322 kernel: Key type asymmetric registered Sep 9 06:57:01.036335 kernel: Asymmetric key parser 'x509' registered Sep 9 06:57:01.036347 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 06:57:01.036360 kernel: io scheduler mq-deadline registered Sep 9 06:57:01.036372 kernel: io scheduler kyber registered Sep 9 06:57:01.036385 kernel: io scheduler bfq registered Sep 9 06:57:01.036616 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 9 06:57:01.036851 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 9 06:57:01.037099 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.037268 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 9 06:57:01.037430 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 9 06:57:01.037592 kernel: pcieport 
0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.037753 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 9 06:57:01.037926 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 9 06:57:01.038120 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.038284 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 9 06:57:01.038451 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 9 06:57:01.038614 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.038775 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 9 06:57:01.038972 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 9 06:57:01.039145 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.039307 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 9 06:57:01.039467 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 9 06:57:01.039628 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.039789 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 9 06:57:01.040032 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 9 06:57:01.040205 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.040392 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 9 06:57:01.040583 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 9 06:57:01.040748 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 
AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 06:57:01.040767 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 06:57:01.040781 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 06:57:01.040801 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 06:57:01.040814 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 06:57:01.040827 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 06:57:01.040840 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 06:57:01.040852 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 06:57:01.040865 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 06:57:01.040878 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 06:57:01.041146 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 9 06:57:01.041312 kernel: rtc_cmos 00:03: registered as rtc0 Sep 9 06:57:01.041465 kernel: rtc_cmos 00:03: setting system clock to 2025-09-09T06:57:00 UTC (1757401020) Sep 9 06:57:01.041633 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 9 06:57:01.041653 kernel: intel_pstate: CPU model not supported Sep 9 06:57:01.041666 kernel: NET: Registered PF_INET6 protocol family Sep 9 06:57:01.041679 kernel: Segment Routing with IPv6 Sep 9 06:57:01.041691 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 06:57:01.041704 kernel: NET: Registered PF_PACKET protocol family Sep 9 06:57:01.041717 kernel: Key type dns_resolver registered Sep 9 06:57:01.041738 kernel: IPI shorthand broadcast: enabled Sep 9 06:57:01.041751 kernel: sched_clock: Marking stable (3815003959, 216017541)->(4172708100, -141686600) Sep 9 06:57:01.041764 kernel: registered taskstats version 1 Sep 9 06:57:01.041777 kernel: Loading compiled-in X.509 certificates Sep 9 06:57:01.041790 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 
884b9ad6a330f59ae6e6488b20a5491e41ff24a3' Sep 9 06:57:01.041802 kernel: Demotion targets for Node 0: null Sep 9 06:57:01.041814 kernel: Key type .fscrypt registered Sep 9 06:57:01.041827 kernel: Key type fscrypt-provisioning registered Sep 9 06:57:01.041839 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 06:57:01.041856 kernel: ima: Allocated hash algorithm: sha1 Sep 9 06:57:01.041878 kernel: ima: No architecture policies found Sep 9 06:57:01.041903 kernel: clk: Disabling unused clocks Sep 9 06:57:01.041916 kernel: Warning: unable to open an initial console. Sep 9 06:57:01.041928 kernel: Freeing unused kernel image (initmem) memory: 54076K Sep 9 06:57:01.041941 kernel: Write protecting the kernel read-only data: 24576k Sep 9 06:57:01.041970 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 06:57:01.041984 kernel: Run /init as init process Sep 9 06:57:01.042003 kernel: with arguments: Sep 9 06:57:01.042015 kernel: /init Sep 9 06:57:01.042028 kernel: with environment: Sep 9 06:57:01.042040 kernel: HOME=/ Sep 9 06:57:01.042052 kernel: TERM=linux Sep 9 06:57:01.042064 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 06:57:01.042078 systemd[1]: Successfully made /usr/ read-only. Sep 9 06:57:01.042095 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 06:57:01.042114 systemd[1]: Detected virtualization kvm. Sep 9 06:57:01.042128 systemd[1]: Detected architecture x86-64. Sep 9 06:57:01.042141 systemd[1]: Running in initrd. Sep 9 06:57:01.042153 systemd[1]: No hostname configured, using default hostname. Sep 9 06:57:01.042167 systemd[1]: Hostname set to . Sep 9 06:57:01.042180 systemd[1]: Initializing machine ID from VM UUID. 
Sep 9 06:57:01.042193 systemd[1]: Queued start job for default target initrd.target. Sep 9 06:57:01.042206 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 06:57:01.042225 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 06:57:01.042239 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 06:57:01.042253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 06:57:01.042266 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 06:57:01.042280 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 06:57:01.042295 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 06:57:01.042309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 06:57:01.042327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 06:57:01.042341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 06:57:01.042354 systemd[1]: Reached target paths.target - Path Units. Sep 9 06:57:01.042367 systemd[1]: Reached target slices.target - Slice Units. Sep 9 06:57:01.042381 systemd[1]: Reached target swap.target - Swaps. Sep 9 06:57:01.042394 systemd[1]: Reached target timers.target - Timer Units. Sep 9 06:57:01.042407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 06:57:01.042420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 06:57:01.042434 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 06:57:01.042452 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 9 06:57:01.042466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 06:57:01.042479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 06:57:01.042493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 06:57:01.042506 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 06:57:01.042520 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 06:57:01.042533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 06:57:01.042547 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 06:57:01.042565 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 06:57:01.042579 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 06:57:01.042597 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 06:57:01.042611 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 06:57:01.042624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 06:57:01.042638 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 06:57:01.042657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 06:57:01.042671 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 06:57:01.042738 systemd-journald[229]: Collecting audit messages is disabled. Sep 9 06:57:01.042775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 06:57:01.042789 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 9 06:57:01.042802 kernel: Bridge firewalling registered Sep 9 06:57:01.042815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 06:57:01.042829 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 06:57:01.042844 systemd-journald[229]: Journal started Sep 9 06:57:01.042872 systemd-journald[229]: Runtime Journal (/run/log/journal/b20dd6da9097476cbd18e7547c53d3fe) is 4.7M, max 38.2M, 33.4M free. Sep 9 06:57:00.947619 systemd-modules-load[231]: Inserted module 'overlay' Sep 9 06:57:01.085731 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 06:57:01.002170 systemd-modules-load[231]: Inserted module 'br_netfilter' Sep 9 06:57:01.084667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 06:57:01.090124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 06:57:01.096001 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 06:57:01.098147 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 06:57:01.103980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 06:57:01.108158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 06:57:01.125646 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 06:57:01.127633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 06:57:01.133746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 06:57:01.137249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 06:57:01.139131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 9 06:57:01.143130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 06:57:01.168975 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 06:57:01.197744 systemd-resolved[268]: Positive Trust Anchors: Sep 9 06:57:01.197771 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 06:57:01.197813 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 06:57:01.202490 systemd-resolved[268]: Defaulting to hostname 'linux'. Sep 9 06:57:01.204302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 06:57:01.205521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 06:57:01.311051 kernel: SCSI subsystem initialized Sep 9 06:57:01.322987 kernel: Loading iSCSI transport class v2.0-870. 
Sep 9 06:57:01.340989 kernel: iscsi: registered transport (tcp) Sep 9 06:57:01.368106 kernel: iscsi: registered transport (qla4xxx) Sep 9 06:57:01.368222 kernel: QLogic iSCSI HBA Driver Sep 9 06:57:01.395224 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 06:57:01.414640 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 06:57:01.418076 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 06:57:01.481691 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 06:57:01.487541 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 06:57:01.549019 kernel: raid6: sse2x4 gen() 10091 MB/s Sep 9 06:57:01.566017 kernel: raid6: sse2x2 gen() 6581 MB/s Sep 9 06:57:01.584500 kernel: raid6: sse2x1 gen() 6744 MB/s Sep 9 06:57:01.584641 kernel: raid6: using algorithm sse2x4 gen() 10091 MB/s Sep 9 06:57:01.603511 kernel: raid6: .... xor() 6451 MB/s, rmw enabled Sep 9 06:57:01.603689 kernel: raid6: using ssse3x2 recovery algorithm Sep 9 06:57:01.629008 kernel: xor: automatically using best checksumming function avx Sep 9 06:57:01.999020 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 06:57:02.009564 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 06:57:02.013499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 06:57:02.043548 systemd-udevd[478]: Using default interface naming scheme 'v255'. Sep 9 06:57:02.053361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 06:57:02.058191 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 06:57:02.090995 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Sep 9 06:57:02.127629 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 9 06:57:02.130386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 06:57:02.270150 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 06:57:02.274483 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 06:57:02.398974 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 9 06:57:02.408975 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 06:57:02.417178 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 9 06:57:02.440992 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 06:57:02.453320 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 06:57:02.453363 kernel: GPT:17805311 != 125829119 Sep 9 06:57:02.453409 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 06:57:02.458104 kernel: GPT:17805311 != 125829119 Sep 9 06:57:02.458141 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 06:57:02.458159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 06:57:02.461350 kernel: ACPI: bus type USB registered Sep 9 06:57:02.461391 kernel: usbcore: registered new interface driver usbfs Sep 9 06:57:02.461979 kernel: AES CTR mode by8 optimization enabled Sep 9 06:57:02.464985 kernel: usbcore: registered new interface driver hub Sep 9 06:57:02.466971 kernel: usbcore: registered new device driver usb Sep 9 06:57:02.477729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 06:57:02.477929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 06:57:02.481552 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 06:57:02.488729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 06:57:02.508767 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 9 06:57:02.516008 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 06:57:02.516316 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 9 06:57:02.519971 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 9 06:57:02.524276 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 06:57:02.524532 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 9 06:57:02.526483 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 9 06:57:02.529603 kernel: hub 1-0:1.0: USB hub found Sep 9 06:57:02.529881 kernel: hub 1-0:1.0: 4 ports detected Sep 9 06:57:02.533969 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 9 06:57:02.534223 kernel: hub 2-0:1.0: USB hub found Sep 9 06:57:02.535720 kernel: hub 2-0:1.0: 4 ports detected Sep 9 06:57:02.554981 kernel: libata version 3.00 loaded. Sep 9 06:57:02.577215 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 06:57:02.580110 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 06:57:02.592096 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 06:57:02.592375 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 06:57:02.592596 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 06:57:02.618741 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 9 06:57:02.696654 kernel: scsi host0: ahci Sep 9 06:57:02.697009 kernel: scsi host1: ahci Sep 9 06:57:02.697298 kernel: scsi host2: ahci Sep 9 06:57:02.697531 kernel: scsi host3: ahci Sep 9 06:57:02.697723 kernel: scsi host4: ahci Sep 9 06:57:02.698238 kernel: scsi host5: ahci Sep 9 06:57:02.698478 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1 Sep 9 06:57:02.698500 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1 Sep 9 06:57:02.698517 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1 Sep 9 06:57:02.698541 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1 Sep 9 06:57:02.698559 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1 Sep 9 06:57:02.698576 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1 Sep 9 06:57:02.696360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 06:57:02.715911 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 06:57:02.716743 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 06:57:02.730410 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 06:57:02.750985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 06:57:02.753267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 06:57:02.770081 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 9 06:57:02.781648 disk-uuid[634]: Primary Header is updated. Sep 9 06:57:02.781648 disk-uuid[634]: Secondary Entries is updated. Sep 9 06:57:02.781648 disk-uuid[634]: Secondary Header is updated. 
Sep 9 06:57:02.788022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 06:57:02.793997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 06:57:02.920001 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 06:57:02.941092 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 06:57:02.941183 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 9 06:57:02.943045 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 06:57:02.947478 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 06:57:02.947512 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 06:57:02.950979 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 06:57:02.973273 kernel: usbcore: registered new interface driver usbhid Sep 9 06:57:02.973358 kernel: usbhid: USB HID core driver Sep 9 06:57:02.982066 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 9 06:57:02.982101 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 9 06:57:03.012741 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 06:57:03.024197 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 06:57:03.025013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 06:57:03.026693 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 06:57:03.030129 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 06:57:03.070972 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 06:57:03.795594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 06:57:03.798440 disk-uuid[635]: The operation has completed successfully. Sep 9 06:57:03.861261 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 9 06:57:03.861419 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 06:57:03.908672 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 06:57:03.943799 sh[660]: Success Sep 9 06:57:03.968481 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 06:57:03.968655 kernel: device-mapper: uevent: version 1.0.3 Sep 9 06:57:03.969557 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 06:57:03.983991 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Sep 9 06:57:04.037794 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 06:57:04.039901 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 06:57:04.049798 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 06:57:04.063993 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (672) Sep 9 06:57:04.068876 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 06:57:04.068909 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 06:57:04.078086 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 06:57:04.078133 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 06:57:04.081890 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 06:57:04.083266 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 06:57:04.084834 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 06:57:04.087123 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 9 06:57:04.089112 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 06:57:04.119988 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705) Sep 9 06:57:04.124041 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 06:57:04.124082 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 06:57:04.133462 kernel: BTRFS info (device vda6): turning on async discard Sep 9 06:57:04.133538 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 06:57:04.141016 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 06:57:04.142674 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 06:57:04.145178 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 06:57:04.256665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 06:57:04.263170 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 06:57:04.436517 systemd-networkd[841]: lo: Link UP Sep 9 06:57:04.436537 systemd-networkd[841]: lo: Gained carrier Sep 9 06:57:04.445458 systemd-networkd[841]: Enumeration completed Sep 9 06:57:04.446187 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 06:57:04.446199 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 06:57:04.450306 systemd-networkd[841]: eth0: Link UP Sep 9 06:57:04.450331 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 9 06:57:04.450813 systemd-networkd[841]: eth0: Gained carrier
Sep 9 06:57:04.450829 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 06:57:04.460306 systemd[1]: Reached target network.target - Network.
Sep 9 06:57:04.501093 systemd-networkd[841]: eth0: DHCPv4 address 10.230.42.222/30, gateway 10.230.42.221 acquired from 10.230.42.221
Sep 9 06:57:04.520546 ignition[756]: Ignition 2.22.0
Sep 9 06:57:04.520576 ignition[756]: Stage: fetch-offline
Sep 9 06:57:04.520726 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:04.520771 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:04.521049 ignition[756]: parsed url from cmdline: ""
Sep 9 06:57:04.521056 ignition[756]: no config URL provided
Sep 9 06:57:04.521066 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 06:57:04.525892 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 06:57:04.521117 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Sep 9 06:57:04.521127 ignition[756]: failed to fetch config: resource requires networking
Sep 9 06:57:04.521546 ignition[756]: Ignition finished successfully
Sep 9 06:57:04.531229 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 06:57:04.584845 ignition[851]: Ignition 2.22.0
Sep 9 06:57:04.584869 ignition[851]: Stage: fetch
Sep 9 06:57:04.585122 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:04.585151 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:04.585305 ignition[851]: parsed url from cmdline: ""
Sep 9 06:57:04.585313 ignition[851]: no config URL provided
Sep 9 06:57:04.585329 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 06:57:04.585349 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Sep 9 06:57:04.585592 ignition[851]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 9 06:57:04.586391 ignition[851]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 9 06:57:04.586458 ignition[851]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 9 06:57:04.603714 ignition[851]: GET result: OK
Sep 9 06:57:04.604355 ignition[851]: parsing config with SHA512: 798e7dc917a51b42127c2417612278631ace72785590d4d60789a2ac4a60469983731a5df3cd653aaa94ebfb35760994b566891e2d494fd1134c47b75c7723e2
Sep 9 06:57:04.614535 unknown[851]: fetched base config from "system"
Sep 9 06:57:04.614572 unknown[851]: fetched base config from "system"
Sep 9 06:57:04.615169 ignition[851]: fetch: fetch complete
Sep 9 06:57:04.614592 unknown[851]: fetched user config from "openstack"
Sep 9 06:57:04.615178 ignition[851]: fetch: fetch passed
Sep 9 06:57:04.617824 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 06:57:04.615247 ignition[851]: Ignition finished successfully
Sep 9 06:57:04.621153 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 06:57:04.667845 ignition[857]: Ignition 2.22.0
Sep 9 06:57:04.667868 ignition[857]: Stage: kargs
Sep 9 06:57:04.668086 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:04.668105 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:04.669308 ignition[857]: kargs: kargs passed
Sep 9 06:57:04.672161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 06:57:04.669389 ignition[857]: Ignition finished successfully
Sep 9 06:57:04.676331 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 06:57:04.742969 ignition[863]: Ignition 2.22.0
Sep 9 06:57:04.743004 ignition[863]: Stage: disks
Sep 9 06:57:04.743225 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:04.743246 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:04.748202 ignition[863]: disks: disks passed
Sep 9 06:57:04.748311 ignition[863]: Ignition finished successfully
Sep 9 06:57:04.750010 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 06:57:04.751563 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 06:57:04.752405 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 06:57:04.753170 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 06:57:04.754727 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 06:57:04.756321 systemd[1]: Reached target basic.target - Basic System.
Sep 9 06:57:04.759443 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 06:57:04.791112 systemd-fsck[871]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 9 06:57:04.795382 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 06:57:04.799205 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 06:57:04.929017 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none.
Sep 9 06:57:04.930881 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 06:57:04.932228 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 06:57:04.935738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 06:57:04.937733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 06:57:04.939744 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 06:57:04.942937 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 9 06:57:04.945946 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 06:57:04.946013 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 06:57:04.959045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 06:57:04.963140 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 06:57:04.973105 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (879)
Sep 9 06:57:04.978532 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 06:57:04.978569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 06:57:04.990353 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 06:57:04.990429 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 06:57:04.994399 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 06:57:05.053001 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:05.065907 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 06:57:05.075403 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory
Sep 9 06:57:05.083300 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 06:57:05.088743 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 06:57:05.213019 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 06:57:05.216155 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 06:57:05.219115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 06:57:05.241645 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 06:57:05.244015 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 06:57:05.267843 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 06:57:05.284172 ignition[997]: INFO : Ignition 2.22.0
Sep 9 06:57:05.284172 ignition[997]: INFO : Stage: mount
Sep 9 06:57:05.286091 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:05.286091 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:05.286091 ignition[997]: INFO : mount: mount passed
Sep 9 06:57:05.286091 ignition[997]: INFO : Ignition finished successfully
Sep 9 06:57:05.287527 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 06:57:06.085004 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:06.142341 systemd-networkd[841]: eth0: Gained IPv6LL
Sep 9 06:57:07.651618 systemd-networkd[841]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ab7:24:19ff:fee6:2ade/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ab7:24:19ff:fee6:2ade/64 assigned by NDisc.
Sep 9 06:57:07.651636 systemd-networkd[841]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 9 06:57:08.097008 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:12.108001 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:12.115279 coreos-metadata[881]: Sep 09 06:57:12.115 WARN failed to locate config-drive, using the metadata service API instead
Sep 9 06:57:12.136780 coreos-metadata[881]: Sep 09 06:57:12.136 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 9 06:57:12.149636 coreos-metadata[881]: Sep 09 06:57:12.149 INFO Fetch successful
Sep 9 06:57:12.150832 coreos-metadata[881]: Sep 09 06:57:12.150 INFO wrote hostname srv-f5a1c.gb1.brightbox.com to /sysroot/etc/hostname
Sep 9 06:57:12.153393 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 9 06:57:12.153576 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 9 06:57:12.157989 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 06:57:12.179182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 06:57:12.205017 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 9 06:57:12.207991 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 06:57:12.210981 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 06:57:12.215994 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 06:57:12.216035 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 06:57:12.218877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 06:57:12.264775 ignition[1032]: INFO : Ignition 2.22.0
Sep 9 06:57:12.264775 ignition[1032]: INFO : Stage: files
Sep 9 06:57:12.266649 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:12.266649 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:12.266649 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 06:57:12.269433 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 06:57:12.269433 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 06:57:12.276856 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 06:57:12.276856 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 06:57:12.276856 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 06:57:12.273784 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 9 06:57:12.280547 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 9 06:57:12.280547 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 9 06:57:12.470597 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 06:57:13.778684 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 06:57:13.788002 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 06:57:13.797054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 06:57:13.797054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 06:57:13.797054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 06:57:13.797054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 06:57:13.797054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 06:57:13.797054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 9 06:57:14.107042 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 06:57:15.676197 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 06:57:15.676197 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 06:57:15.680782 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 06:57:15.682483 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 06:57:15.682483 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 06:57:15.685281 ignition[1032]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 06:57:15.685281 ignition[1032]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 06:57:15.685281 ignition[1032]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 06:57:15.685281 ignition[1032]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 06:57:15.685281 ignition[1032]: INFO : files: files passed
Sep 9 06:57:15.685281 ignition[1032]: INFO : Ignition finished successfully
Sep 9 06:57:15.688469 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 06:57:15.698162 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 06:57:15.700063 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 06:57:15.726672 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 06:57:15.727676 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 06:57:15.739422 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 06:57:15.739422 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 06:57:15.742184 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 06:57:15.742481 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 06:57:15.744681 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 06:57:15.747059 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 06:57:15.812942 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 06:57:15.813300 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 06:57:15.815042 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 06:57:15.816316 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 06:57:15.818193 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 06:57:15.820252 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 06:57:15.860802 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 06:57:15.864071 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 06:57:15.889505 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 06:57:15.890415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 06:57:15.892133 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 06:57:15.893693 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 06:57:15.893980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 06:57:15.895521 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 06:57:15.896423 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 06:57:15.898003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 06:57:15.899333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 06:57:15.900736 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 06:57:15.902330 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 06:57:15.903872 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 06:57:15.905301 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 06:57:15.906941 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 06:57:15.908381 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 06:57:15.909929 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 06:57:15.911436 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 06:57:15.911842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 06:57:15.913285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 06:57:15.914285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 06:57:15.915670 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 06:57:15.915892 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 06:57:15.917351 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 06:57:15.917664 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 06:57:15.919511 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 06:57:15.919795 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 06:57:15.921435 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 06:57:15.921620 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 06:57:15.931222 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 06:57:15.935240 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 06:57:15.935894 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 06:57:15.936267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 06:57:15.939205 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 06:57:15.939468 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 06:57:15.948869 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 06:57:15.949048 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 06:57:15.979093 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 06:57:15.988130 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 06:57:15.989465 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 06:57:16.019439 ignition[1086]: INFO : Ignition 2.22.0
Sep 9 06:57:16.019439 ignition[1086]: INFO : Stage: umount
Sep 9 06:57:16.021596 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 06:57:16.021596 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 9 06:57:16.025798 ignition[1086]: INFO : umount: umount passed
Sep 9 06:57:16.026570 ignition[1086]: INFO : Ignition finished successfully
Sep 9 06:57:16.027728 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 06:57:16.027928 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 06:57:16.029408 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 06:57:16.029598 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 06:57:16.030935 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 06:57:16.031068 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 06:57:16.032397 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 06:57:16.032482 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 06:57:16.033932 systemd[1]: Stopped target network.target - Network.
Sep 9 06:57:16.035187 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 06:57:16.035314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 06:57:16.036598 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 06:57:16.037815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 06:57:16.038202 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 06:57:16.039268 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 06:57:16.040604 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 06:57:16.043011 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 06:57:16.043088 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 06:57:16.044300 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 06:57:16.044387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 06:57:16.045713 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 06:57:16.045858 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 06:57:16.047431 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 06:57:16.047587 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 06:57:16.048822 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 06:57:16.048929 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 06:57:16.050560 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 06:57:16.052381 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 06:57:16.056138 systemd-networkd[841]: eth0: DHCPv6 lease lost
Sep 9 06:57:16.063838 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 06:57:16.065054 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 06:57:16.070835 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 06:57:16.071256 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 06:57:16.071472 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 06:57:16.074282 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 06:57:16.076140 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 06:57:16.077046 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 06:57:16.077123 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 06:57:16.079756 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 06:57:16.081627 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 06:57:16.081708 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 06:57:16.084392 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 06:57:16.084465 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 06:57:16.087558 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 06:57:16.087626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 06:57:16.090674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 06:57:16.090744 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 06:57:16.092802 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 06:57:16.096911 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 06:57:16.097061 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 06:57:16.107468 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 06:57:16.108444 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 06:57:16.109858 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 06:57:16.111042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 06:57:16.112654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 06:57:16.112710 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 06:57:16.116651 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 06:57:16.116845 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 06:57:16.118907 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 06:57:16.119056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 06:57:16.120461 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 06:57:16.120565 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 06:57:16.123090 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 06:57:16.125288 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 06:57:16.125364 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 06:57:16.128105 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 06:57:16.128181 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 06:57:16.130295 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 06:57:16.130428 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 06:57:16.132252 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 06:57:16.132360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 06:57:16.133893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 06:57:16.134009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 06:57:16.143776 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 06:57:16.143868 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 06:57:16.143973 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 06:57:16.144047 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 06:57:16.144762 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 06:57:16.144924 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 06:57:16.150328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 06:57:16.150481 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 06:57:16.152418 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 06:57:16.154727 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 06:57:16.184121 systemd[1]: Switching root.
Sep 9 06:57:16.234599 systemd-journald[229]: Journal stopped
Sep 9 06:57:17.849584 systemd-journald[229]: Received SIGTERM from PID 1 (systemd).
Sep 9 06:57:17.849783 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 06:57:17.849838 kernel: SELinux: policy capability open_perms=1
Sep 9 06:57:17.849885 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 06:57:17.849921 kernel: SELinux: policy capability always_check_network=0
Sep 9 06:57:17.849976 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 06:57:17.850008 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 06:57:17.850036 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 06:57:17.850070 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 06:57:17.850102 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 06:57:17.850122 kernel: audit: type=1403 audit(1757401036.522:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 06:57:17.850173 systemd[1]: Successfully loaded SELinux policy in 81.967ms.
Sep 9 06:57:17.850241 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.478ms.
Sep 9 06:57:17.850277 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 06:57:17.850317 systemd[1]: Detected virtualization kvm.
Sep 9 06:57:17.850359 systemd[1]: Detected architecture x86-64.
Sep 9 06:57:17.850387 systemd[1]: Detected first boot.
Sep 9 06:57:17.850408 systemd[1]: Hostname set to .
Sep 9 06:57:17.850426 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 06:57:17.850464 zram_generator::config[1130]: No configuration found.
Sep 9 06:57:17.850515 kernel: Guest personality initialized and is inactive
Sep 9 06:57:17.850547 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 06:57:17.850584 kernel: Initialized host personality
Sep 9 06:57:17.850604 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 06:57:17.850637 systemd[1]: Populated /etc with preset unit settings.
Sep 9 06:57:17.850666 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 06:57:17.850689 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 06:57:17.850708 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 06:57:17.850742 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 06:57:17.850775 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 06:57:17.850796 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 06:57:17.850829 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 06:57:17.850858 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 06:57:17.850908 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 06:57:17.850944 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 06:57:17.851011 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 06:57:17.851054 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 06:57:17.851077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 06:57:17.851096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 06:57:17.851115 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 06:57:17.851141 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 06:57:17.851171 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 06:57:17.851210 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 06:57:17.851241 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 06:57:17.851270 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 06:57:17.851298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 06:57:17.851329 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 06:57:17.851360 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 06:57:17.851380 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 06:57:17.851399 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 06:57:17.851427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 06:57:17.851462 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 06:57:17.851501 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 06:57:17.851533 systemd[1]: Reached target swap.target - Swaps.
Sep 9 06:57:17.851554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 06:57:17.851581 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 06:57:17.851610 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 06:57:17.851639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 06:57:17.851669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 06:57:17.851698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 06:57:17.851735 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 06:57:17.851771 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 06:57:17.851807 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 06:57:17.851828 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 06:57:17.851847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:17.851866 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 06:57:17.851893 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 06:57:17.851926 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 06:57:17.853996 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 06:57:17.854065 systemd[1]: Reached target machines.target - Containers.
Sep 9 06:57:17.854088 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 06:57:17.854118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 06:57:17.854145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 06:57:17.854188 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 06:57:17.854232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 06:57:17.854257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 06:57:17.854284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 06:57:17.854339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 06:57:17.854370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 06:57:17.854400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 06:57:17.854429 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 06:57:17.854449 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 06:57:17.854477 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 06:57:17.854514 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 06:57:17.854550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 06:57:17.854610 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 06:57:17.854664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 06:57:17.854696 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 06:57:17.854744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 06:57:17.854790 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 06:57:17.854823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 06:57:17.854885 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 06:57:17.854923 systemd[1]: Stopped verity-setup.service.
Sep 9 06:57:17.854945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:17.856069 kernel: fuse: init (API version 7.41)
Sep 9 06:57:17.856118 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 06:57:17.856149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 06:57:17.856171 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 06:57:17.856190 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 06:57:17.856219 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 06:57:17.856240 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 06:57:17.856268 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 06:57:17.856288 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 06:57:17.856308 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 06:57:17.856351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 06:57:17.856373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 06:57:17.856392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 06:57:17.856420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 06:57:17.856440 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 06:57:17.856468 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 06:57:17.856509 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 06:57:17.856531 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 06:57:17.856614 systemd-journald[1224]: Collecting audit messages is disabled.
Sep 9 06:57:17.856695 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 06:57:17.856720 systemd-journald[1224]: Journal started
Sep 9 06:57:17.856752 systemd-journald[1224]: Runtime Journal (/run/log/journal/b20dd6da9097476cbd18e7547c53d3fe) is 4.7M, max 38.2M, 33.4M free.
Sep 9 06:57:17.859189 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 06:57:17.400783 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 06:57:17.416949 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 06:57:17.417831 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 06:57:17.867092 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 06:57:17.872989 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 06:57:17.884984 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 06:57:17.889228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 06:57:17.901337 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 06:57:17.922001 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 06:57:17.922096 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 06:57:17.922126 kernel: loop: module loaded
Sep 9 06:57:17.924984 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 06:57:17.935004 kernel: ACPI: bus type drm_connector registered
Sep 9 06:57:17.941147 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 06:57:17.947977 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 06:57:17.954432 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 06:57:17.955624 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 06:57:17.956471 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 06:57:17.958735 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 06:57:17.959182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 06:57:17.961183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 06:57:17.963417 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 06:57:17.965829 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 06:57:17.967249 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 06:57:17.968834 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 06:57:18.000406 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 06:57:18.007229 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 06:57:18.008269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 06:57:18.018254 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 06:57:18.032256 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 06:57:18.036283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 06:57:18.043591 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 06:57:18.066997 kernel: loop0: detected capacity change from 0 to 128016
Sep 9 06:57:18.122987 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 06:57:18.123210 systemd-journald[1224]: Time spent on flushing to /var/log/journal/b20dd6da9097476cbd18e7547c53d3fe is 237.867ms for 1172 entries.
Sep 9 06:57:18.123210 systemd-journald[1224]: System Journal (/var/log/journal/b20dd6da9097476cbd18e7547c53d3fe) is 8M, max 584.8M, 576.8M free.
Sep 9 06:57:18.417028 systemd-journald[1224]: Received client request to flush runtime journal.
Sep 9 06:57:18.417175 kernel: loop1: detected capacity change from 0 to 224512
Sep 9 06:57:18.142873 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 06:57:18.152561 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 9 06:57:18.152581 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 9 06:57:18.157990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 06:57:18.184338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 06:57:18.195242 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 06:57:18.421379 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 06:57:18.424022 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 06:57:18.425086 kernel: loop2: detected capacity change from 0 to 8
Sep 9 06:57:18.461215 kernel: loop3: detected capacity change from 0 to 110984
Sep 9 06:57:18.493706 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 06:57:18.505162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 06:57:18.535539 kernel: loop4: detected capacity change from 0 to 128016
Sep 9 06:57:18.574982 kernel: loop5: detected capacity change from 0 to 224512
Sep 9 06:57:18.576645 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Sep 9 06:57:18.577207 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Sep 9 06:57:18.584703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 06:57:18.615022 kernel: loop6: detected capacity change from 0 to 8
Sep 9 06:57:18.616156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 06:57:18.638088 kernel: loop7: detected capacity change from 0 to 110984
Sep 9 06:57:18.684619 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 9 06:57:18.688006 (sd-merge)[1293]: Merged extensions into '/usr'.
Sep 9 06:57:18.706624 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 06:57:18.706679 systemd[1]: Reloading...
Sep 9 06:57:19.105169 zram_generator::config[1321]: No configuration found.
Sep 9 06:57:19.155799 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 06:57:19.470239 systemd[1]: Reloading finished in 761 ms.
Sep 9 06:57:19.514726 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 06:57:19.523765 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 06:57:19.530978 systemd[1]: Starting ensure-sysext.service...
Sep 9 06:57:19.535414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 06:57:19.573561 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Sep 9 06:57:19.573801 systemd[1]: Reloading...
Sep 9 06:57:19.596045 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 06:57:19.596108 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 06:57:19.596658 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 06:57:19.597298 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 06:57:19.599856 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 06:57:19.601303 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Sep 9 06:57:19.601420 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Sep 9 06:57:19.610929 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 06:57:19.610947 systemd-tmpfiles[1379]: Skipping /boot
Sep 9 06:57:19.633064 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 06:57:19.633351 systemd-tmpfiles[1379]: Skipping /boot
Sep 9 06:57:19.671998 zram_generator::config[1406]: No configuration found.
Sep 9 06:57:19.968037 systemd[1]: Reloading finished in 393 ms.
Sep 9 06:57:19.991033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 06:57:20.005965 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 06:57:20.017683 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 06:57:20.023274 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 06:57:20.030328 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 06:57:20.035629 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 06:57:20.042101 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 06:57:20.049488 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 06:57:20.056868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:20.057182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 06:57:20.060215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 06:57:20.072580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 06:57:20.080824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 06:57:20.082226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 06:57:20.082447 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 06:57:20.082616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:20.089598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:20.090377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 06:57:20.090632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 06:57:20.090772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 06:57:20.096011 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 06:57:20.098020 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:20.106041 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:20.106406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 06:57:20.134526 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 06:57:20.135968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 06:57:20.137115 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 06:57:20.137355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 06:57:20.140059 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 06:57:20.154710 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 06:57:20.160878 systemd[1]: Finished ensure-sysext.service.
Sep 9 06:57:20.177579 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 06:57:20.179847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 06:57:20.182021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 06:57:20.183468 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 06:57:20.184359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 06:57:20.193446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 06:57:20.201066 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 06:57:20.202575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 06:57:20.204040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 06:57:20.206426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 06:57:20.208583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 06:57:20.209728 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 06:57:20.212394 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 06:57:20.219056 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 06:57:20.220931 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 06:57:20.251382 systemd-udevd[1469]: Using default interface naming scheme 'v255'.
Sep 9 06:57:20.263647 augenrules[1505]: No rules
Sep 9 06:57:20.266401 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 06:57:20.268410 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 06:57:20.274930 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 06:57:20.304000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 06:57:20.313257 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 06:57:20.434525 systemd-resolved[1467]: Positive Trust Anchors:
Sep 9 06:57:20.435263 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 06:57:20.435314 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 06:57:20.452127 systemd-resolved[1467]: Using system hostname 'srv-f5a1c.gb1.brightbox.com'.
Sep 9 06:57:20.458910 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 06:57:20.465969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 06:57:20.506844 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 06:57:20.508232 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 06:57:20.509627 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 06:57:20.510406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 06:57:20.512551 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 06:57:20.514078 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 06:57:20.515413 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 06:57:20.515471 systemd[1]: Reached target paths.target - Path Units.
Sep 9 06:57:20.516664 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 06:57:20.517862 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 06:57:20.519564 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 06:57:20.521036 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 06:57:20.523731 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 06:57:20.531039 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 06:57:20.540296 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 06:57:20.543394 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 06:57:20.544656 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 06:57:20.556137 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 06:57:20.557615 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 06:57:20.562073 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 06:57:20.565738 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 06:57:20.568051 systemd[1]: Reached target basic.target - Basic System.
Sep 9 06:57:20.568775 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 06:57:20.568832 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 06:57:20.573339 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 06:57:20.576541 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 06:57:20.580829 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 06:57:20.585692 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 06:57:20.597263 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 06:57:20.599074 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 06:57:20.605543 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 06:57:20.614815 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 06:57:20.623279 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 06:57:20.631981 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:20.636303 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 06:57:20.649917 jq[1548]: false
Sep 9 06:57:20.650764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 06:57:20.668349 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 06:57:20.671559 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 06:57:20.673346 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 06:57:20.675631 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 06:57:20.690692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 06:57:20.694628 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 06:57:20.695939 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 06:57:20.697208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 06:57:20.719980 extend-filesystems[1549]: Found /dev/vda6
Sep 9 06:57:20.723148 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing passwd entry cache
Sep 9 06:57:20.720321 oslogin_cache_refresh[1550]: Refreshing passwd entry cache
Sep 9 06:57:20.725364 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 06:57:20.725722 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 06:57:20.734566 extend-filesystems[1549]: Found /dev/vda9
Sep 9 06:57:20.754560 extend-filesystems[1549]: Checking size of /dev/vda9
Sep 9 06:57:20.757464 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting users, quitting
Sep 9 06:57:20.757464 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 06:57:20.757394 oslogin_cache_refresh[1550]: Failure getting users, quitting
Sep 9 06:57:20.757446 oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 06:57:20.757727 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing group entry cache
Sep 9 06:57:20.757647 oslogin_cache_refresh[1550]: Refreshing group entry cache
Sep 9 06:57:20.759766 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting groups, quitting
Sep 9 06:57:20.759766 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 06:57:20.759756 oslogin_cache_refresh[1550]: Failure getting groups, quitting
Sep 9 06:57:20.759773 oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 06:57:20.781662 extend-filesystems[1549]: Resized partition /dev/vda9
Sep 9 06:57:20.788837 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 06:57:20.801135 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 06:57:20.801533 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 06:57:20.805884 dbus-daemon[1546]: [system] SELinux support is enabled
Sep 9 06:57:20.809143 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Sep 9 06:57:20.808473 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 06:57:20.813042 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 06:57:20.813394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 06:57:20.816134 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 06:57:20.816186 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 06:57:20.817039 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 06:57:20.817066 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 06:57:20.831992 jq[1562]: true
Sep 9 06:57:20.841695 tar[1567]: linux-amd64/LICENSE
Sep 9 06:57:20.841695 tar[1567]: linux-amd64/helm
Sep 9 06:57:20.870592 update_engine[1560]: I20250909 06:57:20.870386 1560 main.cc:92] Flatcar Update Engine starting
Sep 9 06:57:20.887996 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 06:57:20.898321 update_engine[1560]: I20250909 06:57:20.891403 1560 update_check_scheduler.cc:74] Next update check in 8m5s
Sep 9 06:57:20.906623 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 06:57:20.918475 jq[1588]: true
Sep 9 06:57:20.937445 systemd-networkd[1520]: lo: Link UP
Sep 9 06:57:20.938060 systemd-networkd[1520]: lo: Gained carrier
Sep 9 06:57:20.946759 systemd-networkd[1520]: Enumeration completed
Sep 9 06:57:20.947062 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 06:57:20.948024 systemd[1]: Reached target network.target - Network.
Sep 9 06:57:20.956532 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 06:57:20.962460 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 06:57:20.967528 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 06:57:21.189258 bash[1609]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 06:57:21.206454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 06:57:21.216682 systemd[1]: Starting sshkeys.service...
Sep 9 06:57:21.223921 (ntainerd)[1614]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 06:57:21.237436 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 06:57:21.250909 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 9 06:57:21.254845 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 06:57:21.281402 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 06:57:21.281402 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 9 06:57:21.281402 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 9 06:57:21.299448 extend-filesystems[1549]: Resized filesystem in /dev/vda9
Sep 9 06:57:21.283640 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 06:57:21.306171 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 06:57:21.284188 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 06:57:21.373673 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 06:57:21.384170 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 06:57:21.463034 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:21.599295 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 06:57:21.603506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 06:57:21.625204 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 06:57:21.675528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 06:57:21.680733 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 06:57:21.686686 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 06:57:21.688562 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 06:57:21.693581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 06:57:21.738104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 06:57:21.793719 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 06:57:21.800496 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 06:57:21.808001 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 06:57:21.809834 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 06:57:21.833431 systemd-logind[1558]: New seat seat0.
Sep 9 06:57:21.841496 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 06:57:21.896764 systemd-networkd[1520]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 06:57:21.896781 systemd-networkd[1520]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 06:57:21.901884 systemd-networkd[1520]: eth0: Link UP
Sep 9 06:57:21.902429 systemd-networkd[1520]: eth0: Gained carrier
Sep 9 06:57:21.902477 systemd-networkd[1520]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 06:57:21.945630 systemd-networkd[1520]: eth0: DHCPv4 address 10.230.42.222/30, gateway 10.230.42.221 acquired from 10.230.42.221
Sep 9 06:57:21.945911 dbus-daemon[1546]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1520 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 06:57:21.950792 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 9 06:57:21.951363 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 9 06:57:21.957856 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 06:57:21.961130 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 06:57:21.977078 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 06:57:22.073993 kernel: ACPI: button: Power Button [PWRF]
Sep 9 06:57:22.084035 containerd[1614]: time="2025-09-09T06:57:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 06:57:22.088651 containerd[1614]: time="2025-09-09T06:57:22.088611683Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 06:57:22.173637 containerd[1614]: time="2025-09-09T06:57:22.173580987Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.225µs"
Sep 9 06:57:22.173809 containerd[1614]: time="2025-09-09T06:57:22.173782619Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 06:57:22.173939 containerd[1614]: time="2025-09-09T06:57:22.173913794Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 06:57:22.175843 containerd[1614]: time="2025-09-09T06:57:22.175419935Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 06:57:22.176757 containerd[1614]: time="2025-09-09T06:57:22.176730322Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 06:57:22.176880 containerd[1614]: time="2025-09-09T06:57:22.176855514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 06:57:22.178018 containerd[1614]: time="2025-09-09T06:57:22.177948833Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 06:57:22.179002 containerd[1614]: time="2025-09-09T06:57:22.178552568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 06:57:22.179002 containerd[1614]: time="2025-09-09T06:57:22.178845532Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 06:57:22.179002 containerd[1614]: time="2025-09-09T06:57:22.178869751Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 06:57:22.179002 containerd[1614]: time="2025-09-09T06:57:22.178888384Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 06:57:22.179002 containerd[1614]: time="2025-09-09T06:57:22.178902506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 06:57:22.181943 containerd[1614]: time="2025-09-09T06:57:22.180412316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 06:57:22.183470 containerd[1614]: time="2025-09-09T06:57:22.183439762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 06:57:22.184003 containerd[1614]: time="2025-09-09T06:57:22.183973058Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 06:57:22.184989 containerd[1614]: time="2025-09-09T06:57:22.184456888Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 06:57:22.184989 containerd[1614]: time="2025-09-09T06:57:22.184525878Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 06:57:22.184989 containerd[1614]: time="2025-09-09T06:57:22.184872264Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 06:57:22.185634 containerd[1614]: time="2025-09-09T06:57:22.185607367Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 06:57:22.198103 containerd[1614]: time="2025-09-09T06:57:22.198047305Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 06:57:22.198508 containerd[1614]: time="2025-09-09T06:57:22.198456078Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200001839Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200040004Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200083520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200104638Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200154427Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200204350Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200247026Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200267866Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200346843Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 06:57:22.200704 containerd[1614]: time="2025-09-09T06:57:22.200428438Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202021467Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202090686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202121973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202161793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202180763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202196747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202232963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202253688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202270585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 06:57:22.203977 containerd[1614]: time="2025-09-09T06:57:22.202286392Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 06:57:22.205214 containerd[1614]: time="2025-09-09T06:57:22.204493973Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 06:57:22.205214 containerd[1614]: time="2025-09-09T06:57:22.204654356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 06:57:22.205214 containerd[1614]: time="2025-09-09T06:57:22.204680696Z" level=info msg="Start snapshots syncer"
Sep 9 06:57:22.205214 containerd[1614]: time="2025-09-09T06:57:22.204773409Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 06:57:22.205739 containerd[1614]: time="2025-09-09T06:57:22.205682155Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 06:57:22.213010 containerd[1614]: time="2025-09-09T06:57:22.208039270Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 06:57:22.213294 containerd[1614]: time="2025-09-09T06:57:22.213232857Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 06:57:22.213573 containerd[1614]: time="2025-09-09T06:57:22.213538485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 06:57:22.213660 containerd[1614]: time="2025-09-09T06:57:22.213608057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 06:57:22.213660 containerd[1614]: time="2025-09-09T06:57:22.213634559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 06:57:22.213731 containerd[1614]: time="2025-09-09T06:57:22.213656075Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 06:57:22.213731 containerd[1614]: time="2025-09-09T06:57:22.213695787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 06:57:22.213731 containerd[1614]: time="2025-09-09T06:57:22.213718030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 06:57:22.213884 containerd[1614]: time="2025-09-09T06:57:22.213738577Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 06:57:22.213884 containerd[1614]: time="2025-09-09T06:57:22.213808112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 06:57:22.213884 containerd[1614]: time="2025-09-09T06:57:22.213832241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 06:57:22.213884 containerd[1614]: time="2025-09-09T06:57:22.213854262Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.213931683Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215337904Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215362704Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215397782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215415328Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215433904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215456001Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215486161Z" level=info msg="runtime interface created"
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215496648Z" level=info msg="created NRI interface"
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215513039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215543705Z" level=info msg="Connect containerd service"
Sep 9 06:57:22.216046 containerd[1614]: time="2025-09-09T06:57:22.215595795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 06:57:22.224982 containerd[1614]: time="2025-09-09T06:57:22.223729057Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 06:57:22.287970 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 9 06:57:22.292753 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 9 06:57:22.296660 dbus-daemon[1546]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1656 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 9 06:57:22.305919 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 9 06:57:22.577441 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 06:57:22.584351 systemd[1]: Started sshd@0-10.230.42.222:22-139.178.68.195:60718.service - OpenSSH per-connection server daemon (139.178.68.195:60718).
Sep 9 06:57:22.647592 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 06:57:22.657997 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 06:57:22.742380 polkitd[1663]: Started polkitd version 126
Sep 9 06:57:22.776475 polkitd[1663]: Loading rules from directory /etc/polkit-1/rules.d
Sep 9 06:57:22.776967 polkitd[1663]: Loading rules from directory /run/polkit-1/rules.d
Sep 9 06:57:22.782330 polkitd[1663]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 9 06:57:22.782947 polkitd[1663]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 9 06:57:22.783023 polkitd[1663]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 9 06:57:22.783095 polkitd[1663]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 9 06:57:22.793861 polkitd[1663]: Finished loading, compiling and executing 2 rules
Sep 9 06:57:22.799073 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 9 06:57:22.800599 systemd[1]: Started polkit.service - Authorization Manager.
Sep 9 06:57:22.807592 polkitd[1663]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 9 06:57:22.857589 systemd-hostnamed[1656]: Hostname set to (static)
Sep 9 06:57:22.858648 containerd[1614]: time="2025-09-09T06:57:22.856447730Z" level=info msg="Start subscribing containerd event"
Sep 9 06:57:22.861982 containerd[1614]: time="2025-09-09T06:57:22.861432287Z" level=info msg="Start recovering state"
Sep 9 06:57:22.867179 containerd[1614]: time="2025-09-09T06:57:22.867149696Z" level=info msg="Start event monitor"
Sep 9 06:57:22.867480 containerd[1614]: time="2025-09-09T06:57:22.867451995Z" level=info msg="Start cni network conf syncer for default"
Sep 9 06:57:22.867674 containerd[1614]: time="2025-09-09T06:57:22.867640919Z" level=info msg="Start streaming server"
Sep 9 06:57:22.867863 containerd[1614]: time="2025-09-09T06:57:22.867839319Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 06:57:22.868086 containerd[1614]: time="2025-09-09T06:57:22.868060862Z" level=info msg="runtime interface starting up..."
Sep 9 06:57:22.871018 containerd[1614]: time="2025-09-09T06:57:22.868631688Z" level=info msg="starting plugins..."
Sep 9 06:57:22.871018 containerd[1614]: time="2025-09-09T06:57:22.868780732Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 06:57:22.871398 containerd[1614]: time="2025-09-09T06:57:22.871302904Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 06:57:22.871695 containerd[1614]: time="2025-09-09T06:57:22.871652404Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 06:57:22.872273 containerd[1614]: time="2025-09-09T06:57:22.872246120Z" level=info msg="containerd successfully booted in 0.790949s"
Sep 9 06:57:22.875583 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 06:57:22.919775 tar[1567]: linux-amd64/README.md
Sep 9 06:57:22.961060 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 06:57:23.307511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 06:57:23.331387 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 06:57:23.353102 systemd-logind[1558]: Watching system buttons on /dev/input/event3 (Power Button)
Sep 9 06:57:23.553269 systemd-networkd[1520]: eth0: Gained IPv6LL
Sep 9 06:57:23.569156 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 9 06:57:23.577899 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 06:57:23.677946 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 06:57:23.691752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:57:23.719431 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 06:57:23.788105 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 06:57:23.818361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 06:57:23.979895 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 60718 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:23.983393 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:23.999246 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 06:57:24.002065 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 06:57:24.027288 systemd-logind[1558]: New session 1 of user core.
Sep 9 06:57:24.047191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 06:57:24.054152 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 06:57:24.077469 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 06:57:24.082751 systemd-logind[1558]: New session c1 of user core.
Sep 9 06:57:24.278822 systemd[1720]: Queued start job for default target default.target.
Sep 9 06:57:24.291708 systemd[1720]: Created slice app.slice - User Application Slice.
Sep 9 06:57:24.292197 systemd[1720]: Reached target paths.target - Paths.
Sep 9 06:57:24.292753 systemd[1720]: Reached target timers.target - Timers.
Sep 9 06:57:24.298127 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 06:57:24.323680 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 06:57:24.324159 systemd[1720]: Reached target sockets.target - Sockets.
Sep 9 06:57:24.324443 systemd[1720]: Reached target basic.target - Basic System.
Sep 9 06:57:24.324648 systemd[1720]: Reached target default.target - Main User Target.
Sep 9 06:57:24.324723 systemd[1720]: Startup finished in 228ms.
Sep 9 06:57:24.325069 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 06:57:24.526433 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 06:57:24.645008 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:24.652987 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:25.158502 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 9 06:57:25.166169 systemd-networkd[1520]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ab7:24:19ff:fee6:2ade/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ab7:24:19ff:fee6:2ade/64 assigned by NDisc.
Sep 9 06:57:25.166182 systemd-networkd[1520]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 9 06:57:25.175445 systemd[1]: Started sshd@1-10.230.42.222:22-139.178.68.195:60734.service - OpenSSH per-connection server daemon (139.178.68.195:60734).
Sep 9 06:57:25.555372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:57:25.568578 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 06:57:26.200002 sshd[1733]: Accepted publickey for core from 139.178.68.195 port 60734 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:26.202459 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:26.213472 systemd-logind[1558]: New session 2 of user core.
Sep 9 06:57:26.224265 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 06:57:26.546634 kubelet[1742]: E0909 06:57:26.546433 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 06:57:26.550063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 06:57:26.550382 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 06:57:26.551158 systemd[1]: kubelet.service: Consumed 1.660s CPU time, 262.4M memory peak.
Sep 9 06:57:26.675035 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:26.675240 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:26.816007 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 9 06:57:26.829125 sshd[1747]: Connection closed by 139.178.68.195 port 60734
Sep 9 06:57:26.830197 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:26.838386 systemd[1]: sshd@1-10.230.42.222:22-139.178.68.195:60734.service: Deactivated successfully.
Sep 9 06:57:26.844449 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 06:57:26.849448 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit.
Sep 9 06:57:26.852254 systemd-logind[1558]: Removed session 2.
Sep 9 06:57:26.906992 login[1654]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 9 06:57:26.911210 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 9 06:57:26.917211 systemd-logind[1558]: New session 3 of user core.
Sep 9 06:57:26.928432 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 06:57:26.933097 systemd-logind[1558]: New session 4 of user core.
Sep 9 06:57:26.940368 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 06:57:26.988354 systemd[1]: Started sshd@2-10.230.42.222:22-139.178.68.195:60746.service - OpenSSH per-connection server daemon (139.178.68.195:60746).
Sep 9 06:57:27.903334 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 60746 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:27.905890 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:27.912491 systemd-logind[1558]: New session 5 of user core.
Sep 9 06:57:27.928324 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 06:57:28.523028 sshd[1789]: Connection closed by 139.178.68.195 port 60746
Sep 9 06:57:28.524039 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:28.529649 systemd[1]: sshd@2-10.230.42.222:22-139.178.68.195:60746.service: Deactivated successfully.
Sep 9 06:57:28.532496 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 06:57:28.534333 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit.
Sep 9 06:57:28.536648 systemd-logind[1558]: Removed session 5.
Sep 9 06:57:30.698989 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:30.699211 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 9 06:57:30.709572 coreos-metadata[1545]: Sep 09 06:57:30.709 WARN failed to locate config-drive, using the metadata service API instead
Sep 9 06:57:30.710142 coreos-metadata[1629]: Sep 09 06:57:30.710 WARN failed to locate config-drive, using the metadata service API instead
Sep 9 06:57:30.736059 coreos-metadata[1629]: Sep 09 06:57:30.735 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Sep 9 06:57:30.736257 coreos-metadata[1545]: Sep 09 06:57:30.736 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Sep 9 06:57:30.743252 coreos-metadata[1545]: Sep 09 06:57:30.743 INFO Fetch failed with 404: resource not found
Sep 9 06:57:30.743673 coreos-metadata[1545]: Sep 09 06:57:30.743 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 9 06:57:30.744348 coreos-metadata[1545]: Sep 09 06:57:30.744 INFO Fetch successful
Sep 9 06:57:30.744606 coreos-metadata[1545]: Sep 09 06:57:30.744 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Sep 9 06:57:30.757440 coreos-metadata[1545]: Sep 09 06:57:30.757 INFO Fetch successful
Sep 9 06:57:30.757732 coreos-metadata[1545]: Sep 09 06:57:30.757 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Sep 9 06:57:30.761473 coreos-metadata[1629]: Sep 09 06:57:30.761 INFO Fetch successful
Sep 9 06:57:30.761648 coreos-metadata[1629]: Sep 09 06:57:30.761 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 9 06:57:30.783096 coreos-metadata[1545]: Sep 09 06:57:30.783 INFO Fetch successful
Sep 9 06:57:30.783546 coreos-metadata[1545]: Sep 09 06:57:30.783 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Sep 9 06:57:30.789230 coreos-metadata[1629]: Sep 09 06:57:30.789 INFO Fetch successful
Sep 9 06:57:30.798228 coreos-metadata[1545]: Sep 09 06:57:30.798 INFO Fetch successful
Sep 9 06:57:30.807407 coreos-metadata[1545]: Sep 09 06:57:30.798 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Sep 9 06:57:30.808094 unknown[1629]: wrote ssh authorized keys file for user: core
Sep 9 06:57:30.831907 update-ssh-keys[1798]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 06:57:30.833947 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 06:57:30.838279 systemd[1]: Finished sshkeys.service.
Sep 9 06:57:30.859367 coreos-metadata[1545]: Sep 09 06:57:30.859 INFO Fetch successful
Sep 9 06:57:30.896195 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 06:57:30.897146 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 06:57:30.897369 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 06:57:30.897709 systemd[1]: Startup finished in 3.896s (kernel) + 15.843s (initrd) + 14.455s (userspace) = 34.194s.
Sep 9 06:57:36.627298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 06:57:36.630691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:57:36.980241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:57:36.992670 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 06:57:37.077099 kubelet[1815]: E0909 06:57:37.076905 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 06:57:37.082163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 06:57:37.082552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 06:57:37.083496 systemd[1]: kubelet.service: Consumed 392ms CPU time, 109.1M memory peak.
Sep 9 06:57:38.678534 systemd[1]: Started sshd@3-10.230.42.222:22-139.178.68.195:36210.service - OpenSSH per-connection server daemon (139.178.68.195:36210).
Sep 9 06:57:39.582348 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 36210 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:39.584414 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:39.593055 systemd-logind[1558]: New session 6 of user core.
Sep 9 06:57:39.598251 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 06:57:40.201676 sshd[1826]: Connection closed by 139.178.68.195 port 36210
Sep 9 06:57:40.200731 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:40.206296 systemd[1]: sshd@3-10.230.42.222:22-139.178.68.195:36210.service: Deactivated successfully.
Sep 9 06:57:40.209456 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 06:57:40.211116 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit.
Sep 9 06:57:40.214432 systemd-logind[1558]: Removed session 6.
Sep 9 06:57:40.356614 systemd[1]: Started sshd@4-10.230.42.222:22-139.178.68.195:37398.service - OpenSSH per-connection server daemon (139.178.68.195:37398).
Sep 9 06:57:41.258558 sshd[1832]: Accepted publickey for core from 139.178.68.195 port 37398 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:41.260370 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:41.268081 systemd-logind[1558]: New session 7 of user core.
Sep 9 06:57:41.279155 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 06:57:41.871043 sshd[1835]: Connection closed by 139.178.68.195 port 37398
Sep 9 06:57:41.872107 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:41.878500 systemd[1]: sshd@4-10.230.42.222:22-139.178.68.195:37398.service: Deactivated successfully.
Sep 9 06:57:41.881619 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 06:57:41.884309 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit.
Sep 9 06:57:41.885988 systemd-logind[1558]: Removed session 7.
Sep 9 06:57:42.031111 systemd[1]: Started sshd@5-10.230.42.222:22-139.178.68.195:37406.service - OpenSSH per-connection server daemon (139.178.68.195:37406).
Sep 9 06:57:42.946486 sshd[1841]: Accepted publickey for core from 139.178.68.195 port 37406 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:42.948653 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:42.956450 systemd-logind[1558]: New session 8 of user core.
Sep 9 06:57:42.965284 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 06:57:43.563685 sshd[1844]: Connection closed by 139.178.68.195 port 37406
Sep 9 06:57:43.564714 sshd-session[1841]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:43.570533 systemd[1]: sshd@5-10.230.42.222:22-139.178.68.195:37406.service: Deactivated successfully.
Sep 9 06:57:43.573228 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 06:57:43.575377 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit.
Sep 9 06:57:43.577066 systemd-logind[1558]: Removed session 8.
Sep 9 06:57:43.717476 systemd[1]: Started sshd@6-10.230.42.222:22-139.178.68.195:37418.service - OpenSSH per-connection server daemon (139.178.68.195:37418).
Sep 9 06:57:44.618594 sshd[1850]: Accepted publickey for core from 139.178.68.195 port 37418 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:44.620507 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:44.628004 systemd-logind[1558]: New session 9 of user core.
Sep 9 06:57:44.635368 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 06:57:45.107909 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 06:57:45.108475 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 06:57:45.123408 sudo[1854]: pam_unix(sudo:session): session closed for user root
Sep 9 06:57:45.266332 sshd[1853]: Connection closed by 139.178.68.195 port 37418
Sep 9 06:57:45.267571 sshd-session[1850]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:45.273729 systemd[1]: sshd@6-10.230.42.222:22-139.178.68.195:37418.service: Deactivated successfully.
Sep 9 06:57:45.276277 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 06:57:45.277684 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit.
Sep 9 06:57:45.279581 systemd-logind[1558]: Removed session 9.
Sep 9 06:57:45.434479 systemd[1]: Started sshd@7-10.230.42.222:22-139.178.68.195:37430.service - OpenSSH per-connection server daemon (139.178.68.195:37430).
Sep 9 06:57:46.409108 sshd[1860]: Accepted publickey for core from 139.178.68.195 port 37430 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:46.410553 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:46.419036 systemd-logind[1558]: New session 10 of user core.
Sep 9 06:57:46.429210 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 06:57:46.919515 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 06:57:46.920678 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 06:57:46.928525 sudo[1865]: pam_unix(sudo:session): session closed for user root
Sep 9 06:57:46.937689 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 06:57:46.938215 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 06:57:46.953852 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 06:57:47.013890 augenrules[1887]: No rules
Sep 9 06:57:47.016496 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 06:57:47.017031 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 06:57:47.019014 sudo[1864]: pam_unix(sudo:session): session closed for user root
Sep 9 06:57:47.127213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 06:57:47.129759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:57:47.173826 sshd[1863]: Connection closed by 139.178.68.195 port 37430
Sep 9 06:57:47.176175 sshd-session[1860]: pam_unix(sshd:session): session closed for user core
Sep 9 06:57:47.182568 systemd[1]: sshd@7-10.230.42.222:22-139.178.68.195:37430.service: Deactivated successfully.
Sep 9 06:57:47.186097 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 06:57:47.189026 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit.
Sep 9 06:57:47.191329 systemd-logind[1558]: Removed session 10.
Sep 9 06:57:47.326775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:57:47.331640 systemd[1]: Started sshd@8-10.230.42.222:22-139.178.68.195:37444.service - OpenSSH per-connection server daemon (139.178.68.195:37444).
Sep 9 06:57:47.335556 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 06:57:47.442580 kubelet[1903]: E0909 06:57:47.442306 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 06:57:47.445762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 06:57:47.446057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 06:57:47.446867 systemd[1]: kubelet.service: Consumed 222ms CPU time, 110.6M memory peak.
Sep 9 06:57:48.238230 sshd[1905]: Accepted publickey for core from 139.178.68.195 port 37444 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:57:48.239895 sshd-session[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:57:48.247970 systemd-logind[1558]: New session 11 of user core.
Sep 9 06:57:48.255183 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 06:57:48.720638 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 06:57:48.721129 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 06:57:49.709058 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 06:57:49.729748 (dockerd)[1932]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 06:57:50.345816 dockerd[1932]: time="2025-09-09T06:57:50.345411799Z" level=info msg="Starting up"
Sep 9 06:57:50.348135 dockerd[1932]: time="2025-09-09T06:57:50.348096524Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 06:57:50.558039 dockerd[1932]: time="2025-09-09T06:57:50.557910825Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 06:57:50.588014 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1138907325-merged.mount: Deactivated successfully.
Sep 9 06:57:50.619634 dockerd[1932]: time="2025-09-09T06:57:50.619420030Z" level=info msg="Loading containers: start."
Sep 9 06:57:50.644130 kernel: Initializing XFRM netlink socket
Sep 9 06:57:50.932226 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 9 06:57:50.991075 systemd-networkd[1520]: docker0: Link UP
Sep 9 06:57:50.995481 dockerd[1932]: time="2025-09-09T06:57:50.995282325Z" level=info msg="Loading containers: done."
Sep 9 06:57:51.020022 dockerd[1932]: time="2025-09-09T06:57:51.018397727Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 06:57:51.020022 dockerd[1932]: time="2025-09-09T06:57:51.018556285Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 06:57:51.020022 dockerd[1932]: time="2025-09-09T06:57:51.018729697Z" level=info msg="Initializing buildkit"
Sep 9 06:57:51.020064 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck414434617-merged.mount: Deactivated successfully.
Sep 9 06:57:51.061174 dockerd[1932]: time="2025-09-09T06:57:51.061122023Z" level=info msg="Completed buildkit initialization"
Sep 9 06:57:51.075731 dockerd[1932]: time="2025-09-09T06:57:51.074942017Z" level=info msg="Daemon has completed initialization"
Sep 9 06:57:51.075731 dockerd[1932]: time="2025-09-09T06:57:51.075196650Z" level=info msg="API listen on /run/docker.sock"
Sep 9 06:57:51.075361 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 06:57:52.199955 systemd-resolved[1467]: Clock change detected. Flushing caches.
Sep 9 06:57:52.201645 systemd-timesyncd[1487]: Contacted time server [2a00:da00:f407:1b00::1]:123 (2.flatcar.pool.ntp.org).
Sep 9 06:57:52.201808 systemd-timesyncd[1487]: Initial clock synchronization to Tue 2025-09-09 06:57:52.199159 UTC.
Sep 9 06:57:53.451632 containerd[1614]: time="2025-09-09T06:57:53.451424932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 06:57:54.299023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323052094.mount: Deactivated successfully.
Sep 9 06:57:56.267576 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 9 06:57:56.883967 containerd[1614]: time="2025-09-09T06:57:56.883845318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:57:56.900345 containerd[1614]: time="2025-09-09T06:57:56.900270891Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800695"
Sep 9 06:57:56.902587 containerd[1614]: time="2025-09-09T06:57:56.902052732Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:57:56.906379 containerd[1614]: time="2025-09-09T06:57:56.906342248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:57:56.907879 containerd[1614]: time="2025-09-09T06:57:56.907838090Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 3.456096989s"
Sep 9 06:57:56.908100 containerd[1614]: time="2025-09-09T06:57:56.908061412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 9 06:57:56.910407 containerd[1614]: time="2025-09-09T06:57:56.910376301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 06:57:58.681710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 06:57:58.687267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:57:59.339908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:57:59.358674 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 06:57:59.466914 kubelet[2215]: E0909 06:57:59.466827 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 06:57:59.469872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 06:57:59.470213 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 06:57:59.471665 systemd[1]: kubelet.service: Consumed 355ms CPU time, 110M memory peak.
Sep 9 06:57:59.806068 containerd[1614]: time="2025-09-09T06:57:59.805961758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:57:59.807455 containerd[1614]: time="2025-09-09T06:57:59.807414971Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784136"
Sep 9 06:57:59.809099 containerd[1614]: time="2025-09-09T06:57:59.808212886Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:57:59.812079 containerd[1614]: time="2025-09-09T06:57:59.811579368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:57:59.813708 containerd[1614]: time="2025-09-09T06:57:59.813033120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.902616851s"
Sep 9 06:57:59.813708 containerd[1614]: time="2025-09-09T06:57:59.813111563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 9 06:57:59.814945 containerd[1614]: time="2025-09-09T06:57:59.814917403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 06:58:02.909162 containerd[1614]: time="2025-09-09T06:58:02.909073883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:02.911268 containerd[1614]: time="2025-09-09T06:58:02.910500388Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175044"
Sep 9 06:58:02.911268 containerd[1614]: time="2025-09-09T06:58:02.910958686Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:02.916949 containerd[1614]: time="2025-09-09T06:58:02.916894459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:02.918369 containerd[1614]: time="2025-09-09T06:58:02.918327065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 3.103245858s"
Sep 9 06:58:02.918457 containerd[1614]: time="2025-09-09T06:58:02.918379779Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 9 06:58:02.919225 containerd[1614]: time="2025-09-09T06:58:02.919193021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 06:58:04.988215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265529215.mount: Deactivated successfully.
Sep 9 06:58:06.023552 containerd[1614]: time="2025-09-09T06:58:06.022316942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:06.023552 containerd[1614]: time="2025-09-09T06:58:06.023502565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897178"
Sep 9 06:58:06.024493 containerd[1614]: time="2025-09-09T06:58:06.024408073Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:06.026695 containerd[1614]: time="2025-09-09T06:58:06.026649366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:06.027626 containerd[1614]: time="2025-09-09T06:58:06.027583117Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 3.107542473s"
Sep 9 06:58:06.027829 containerd[1614]: time="2025-09-09T06:58:06.027794825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 9 06:58:06.029308 containerd[1614]: time="2025-09-09T06:58:06.029281526Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 06:58:06.718724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195700158.mount: Deactivated successfully.
Sep 9 06:58:06.766457 update_engine[1560]: I20250909 06:58:06.766230 1560 update_attempter.cc:509] Updating boot flags...
Sep 9 06:58:08.503990 containerd[1614]: time="2025-09-09T06:58:08.503897226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:08.505806 containerd[1614]: time="2025-09-09T06:58:08.505447562Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Sep 9 06:58:08.506549 containerd[1614]: time="2025-09-09T06:58:08.506508247Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:08.510222 containerd[1614]: time="2025-09-09T06:58:08.510175603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:08.511844 containerd[1614]: time="2025-09-09T06:58:08.511789280Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.482326895s"
Sep 9 06:58:08.511990 containerd[1614]: time="2025-09-09T06:58:08.511961338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 9 06:58:08.512956 containerd[1614]: time="2025-09-09T06:58:08.512910292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 06:58:09.095642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766353168.mount: Deactivated successfully.
Sep 9 06:58:09.102281 containerd[1614]: time="2025-09-09T06:58:09.102186263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 06:58:09.103897 containerd[1614]: time="2025-09-09T06:58:09.103864229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 9 06:58:09.106611 containerd[1614]: time="2025-09-09T06:58:09.105349395Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 06:58:09.108482 containerd[1614]: time="2025-09-09T06:58:09.108439929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 06:58:09.109425 containerd[1614]: time="2025-09-09T06:58:09.109362872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 596.410297ms"
Sep 9 06:58:09.109511 containerd[1614]: time="2025-09-09T06:58:09.109430027Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 06:58:09.110865 containerd[1614]: time="2025-09-09T06:58:09.110775280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 06:58:09.680443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 06:58:09.685410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:58:09.820695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148288116.mount: Deactivated successfully.
Sep 9 06:58:10.204669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:58:10.218919 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 06:58:10.574187 kubelet[2326]: E0909 06:58:10.573862 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 06:58:10.582340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 06:58:10.582589 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 06:58:10.583196 systemd[1]: kubelet.service: Consumed 525ms CPU time, 107.8M memory peak.
Sep 9 06:58:12.525517 systemd[1]: Started sshd@9-10.230.42.222:22-123.58.213.127:53262.service - OpenSSH per-connection server daemon (123.58.213.127:53262).
Sep 9 06:58:13.328092 containerd[1614]: time="2025-09-09T06:58:13.327987863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:13.329860 containerd[1614]: time="2025-09-09T06:58:13.329488225Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Sep 9 06:58:13.330634 containerd[1614]: time="2025-09-09T06:58:13.330593934Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:13.334120 containerd[1614]: time="2025-09-09T06:58:13.334084729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:13.335786 containerd[1614]: time="2025-09-09T06:58:13.335717382Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.224885931s"
Sep 9 06:58:13.335908 containerd[1614]: time="2025-09-09T06:58:13.335882961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 9 06:58:13.982032 sshd[2375]: Received disconnect from 123.58.213.127 port 53262:11: Bye Bye [preauth]
Sep 9 06:58:13.982032 sshd[2375]: Disconnected from authenticating user root 123.58.213.127 port 53262 [preauth]
Sep 9 06:58:13.984678 systemd[1]: sshd@9-10.230.42.222:22-123.58.213.127:53262.service: Deactivated successfully.
Sep 9 06:58:17.340660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:58:17.341567 systemd[1]: kubelet.service: Consumed 525ms CPU time, 107.8M memory peak.
Sep 9 06:58:17.344623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:58:17.382299 systemd[1]: Reload requested from client PID 2409 ('systemctl') (unit session-11.scope)...
Sep 9 06:58:17.382450 systemd[1]: Reloading...
Sep 9 06:58:17.575108 zram_generator::config[2450]: No configuration found.
Sep 9 06:58:17.957204 systemd[1]: Reloading finished in 573 ms.
Sep 9 06:58:18.038314 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 06:58:18.038443 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 06:58:18.038987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:58:18.039109 systemd[1]: kubelet.service: Consumed 171ms CPU time, 97.9M memory peak.
Sep 9 06:58:18.042005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 06:58:18.220560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 06:58:18.233099 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 06:58:18.352346 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 06:58:18.352346 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 06:58:18.352346 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 06:58:18.353222 kubelet[2521]: I0909 06:58:18.352507 2521 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 06:58:18.836120 kubelet[2521]: I0909 06:58:18.835941 2521 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 06:58:18.836120 kubelet[2521]: I0909 06:58:18.836011 2521 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 06:58:18.836658 kubelet[2521]: I0909 06:58:18.836550 2521 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 06:58:18.878074 kubelet[2521]: E0909 06:58:18.877841 2521 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.42.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:18.880797 kubelet[2521]: I0909 06:58:18.880753 2521 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 06:58:18.911879 kubelet[2521]: I0909 06:58:18.911826 2521 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 06:58:18.921841 kubelet[2521]: I0909 06:58:18.921800 2521 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 06:58:18.925339 kubelet[2521]: I0909 06:58:18.925264 2521 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 06:58:18.925716 kubelet[2521]: I0909 06:58:18.925329 2521 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-f5a1c.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 06:58:18.927382 kubelet[2521]: I0909 06:58:18.927342 2521 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 06:58:18.927382 kubelet[2521]: I0909 06:58:18.927374 2521 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 06:58:18.928863 kubelet[2521]: I0909 06:58:18.928791 2521 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 06:58:18.933132 kubelet[2521]: I0909 06:58:18.933097 2521 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 06:58:18.933232 kubelet[2521]: I0909 06:58:18.933169 2521 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 06:58:18.935254 kubelet[2521]: I0909 06:58:18.934810 2521 kubelet.go:352] "Adding apiserver pod source"
Sep 9 06:58:18.935254 kubelet[2521]: I0909 06:58:18.934870 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 06:58:18.940967 kubelet[2521]: W0909 06:58:18.940892 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.42.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f5a1c.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:18.941553 kubelet[2521]: E0909 06:58:18.941521 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.42.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f5a1c.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:18.943000 kubelet[2521]: I0909 06:58:18.942975 2521 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 06:58:18.946495 kubelet[2521]: I0909 06:58:18.946471 2521 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 06:58:18.947898 kubelet[2521]: W0909 06:58:18.947867 2521 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 06:58:18.950338 kubelet[2521]: I0909 06:58:18.950316 2521 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 06:58:18.950516 kubelet[2521]: I0909 06:58:18.950491 2521 server.go:1287] "Started kubelet"
Sep 9 06:58:18.954211 kubelet[2521]: W0909 06:58:18.953737 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.42.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:18.954211 kubelet[2521]: E0909 06:58:18.953793 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.42.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:18.954211 kubelet[2521]: I0909 06:58:18.954001 2521 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 06:58:18.956439 kubelet[2521]: I0909 06:58:18.956368 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 06:58:18.957109 kubelet[2521]: I0909 06:58:18.957087 2521 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 06:58:18.962000 kubelet[2521]: E0909 06:58:18.958963 2521 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.42.222:6443/api/v1/namespaces/default/events\": dial tcp 10.230.42.222:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-f5a1c.gb1.brightbox.com.18638afd06d2bd6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-f5a1c.gb1.brightbox.com,UID:srv-f5a1c.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-f5a1c.gb1.brightbox.com,},FirstTimestamp:2025-09-09 06:58:18.950458731 +0000 UTC m=+0.711951003,LastTimestamp:2025-09-09 06:58:18.950458731 +0000 UTC m=+0.711951003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-f5a1c.gb1.brightbox.com,}"
Sep 9 06:58:18.964023 kubelet[2521]: I0909 06:58:18.963909 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 06:58:18.976982 kubelet[2521]: I0909 06:58:18.972170 2521 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 06:58:18.976982 kubelet[2521]: E0909 06:58:18.972553 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-f5a1c.gb1.brightbox.com\" not found"
Sep 9 06:58:18.976982 kubelet[2521]: I0909 06:58:18.974303 2521 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 06:58:18.976982 kubelet[2521]: I0909 06:58:18.974409 2521 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 06:58:19.025087 kubelet[2521]: I0909 06:58:19.023213 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 06:58:19.025087 kubelet[2521]: I0909 06:58:19.023600 2521 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 06:58:19.026969 kubelet[2521]: W0909 06:58:19.026916 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.42.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:19.027171 kubelet[2521]: E0909 06:58:19.027127 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.42.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:19.027540 kubelet[2521]: E0909 06:58:19.027405 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.42.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f5a1c.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.42.222:6443: connect: connection refused" interval="200ms"
Sep 9 06:58:19.029612 kubelet[2521]: I0909 06:58:19.029586 2521 factory.go:221] Registration of the systemd container factory successfully
Sep 9 06:58:19.031012 kubelet[2521]: I0909 06:58:19.030983 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 06:58:19.043731 kubelet[2521]: I0909 06:58:19.043624 2521 factory.go:221] Registration of the containerd container factory successfully
Sep 9 06:58:19.055660 kubelet[2521]: I0909 06:58:19.055462 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 06:58:19.057312 kubelet[2521]: I0909 06:58:19.057288 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 06:58:19.057491 kubelet[2521]: I0909 06:58:19.057469 2521 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 06:58:19.057668 kubelet[2521]: I0909 06:58:19.057646 2521 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 06:58:19.058246 kubelet[2521]: I0909 06:58:19.057775 2521 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 06:58:19.058246 kubelet[2521]: E0909 06:58:19.057874 2521 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 06:58:19.074077 kubelet[2521]: E0909 06:58:19.074026 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-f5a1c.gb1.brightbox.com\" not found"
Sep 9 06:58:19.079116 kubelet[2521]: W0909 06:58:19.079057 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.42.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:19.079333 kubelet[2521]: E0909 06:58:19.079300 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.42.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:19.084854 kubelet[2521]: I0909 06:58:19.084829 2521 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 06:58:19.085089 kubelet[2521]: I0909 06:58:19.085065 2521 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 06:58:19.085280 kubelet[2521]: I0909 06:58:19.085260 2521 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 06:58:19.089128 kubelet[2521]: I0909 06:58:19.088092 2521 policy_none.go:49] "None policy: Start"
Sep 9 06:58:19.089607 kubelet[2521]: I0909 06:58:19.089282 2521 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 06:58:19.089607 kubelet[2521]: I0909 06:58:19.089329 2521 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 06:58:19.100788 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 06:58:19.113469 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 06:58:19.119211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 06:58:19.131705 kubelet[2521]: I0909 06:58:19.131665 2521 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 06:58:19.132420 kubelet[2521]: I0909 06:58:19.132374 2521 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 06:58:19.133461 kubelet[2521]: I0909 06:58:19.132418 2521 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 06:58:19.137367 kubelet[2521]: E0909 06:58:19.137326 2521 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 06:58:19.137875 kubelet[2521]: I0909 06:58:19.137638 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 06:58:19.138371 kubelet[2521]: E0909 06:58:19.138104 2521 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-f5a1c.gb1.brightbox.com\" not found"
Sep 9 06:58:19.175366 kubelet[2521]: I0909 06:58:19.175177 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67846797bccbaffffc79e0f25c07c5d3-kubeconfig\") pod \"kube-scheduler-srv-f5a1c.gb1.brightbox.com\" (UID: \"67846797bccbaffffc79e0f25c07c5d3\") " pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.175366 kubelet[2521]: I0909 06:58:19.175246 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3e7a35643cebcea579edf608c78b5ac-ca-certs\") pod \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" (UID: \"b3e7a35643cebcea579edf608c78b5ac\") " pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.175366 kubelet[2521]: I0909 06:58:19.175277 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3e7a35643cebcea579edf608c78b5ac-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" (UID: \"b3e7a35643cebcea579edf608c78b5ac\") " pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.175366 kubelet[2521]: I0909 06:58:19.175310 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-flexvolume-dir\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.175366 kubelet[2521]: I0909 06:58:19.175337 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3e7a35643cebcea579edf608c78b5ac-k8s-certs\") pod \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" (UID: \"b3e7a35643cebcea579edf608c78b5ac\") " pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.176383 kubelet[2521]: I0909 06:58:19.175363 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-ca-certs\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.176383 kubelet[2521]: I0909 06:58:19.175389 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-k8s-certs\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.176383 kubelet[2521]: I0909 06:58:19.175415 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-kubeconfig\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.176383 kubelet[2521]: I0909 06:58:19.175443 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.175751 systemd[1]: Created slice kubepods-burstable-pod67846797bccbaffffc79e0f25c07c5d3.slice - libcontainer container kubepods-burstable-pod67846797bccbaffffc79e0f25c07c5d3.slice.
Sep 9 06:58:19.199555 kubelet[2521]: E0909 06:58:19.199502 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.205872 systemd[1]: Created slice kubepods-burstable-podbc9bf00fada7b8dc8c27db6101a80089.slice - libcontainer container kubepods-burstable-podbc9bf00fada7b8dc8c27db6101a80089.slice.
Sep 9 06:58:19.209840 kubelet[2521]: E0909 06:58:19.209800 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.212915 systemd[1]: Created slice kubepods-burstable-podb3e7a35643cebcea579edf608c78b5ac.slice - libcontainer container kubepods-burstable-podb3e7a35643cebcea579edf608c78b5ac.slice.
Sep 9 06:58:19.216210 kubelet[2521]: E0909 06:58:19.216177 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.229044 kubelet[2521]: E0909 06:58:19.228966 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.42.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f5a1c.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.42.222:6443: connect: connection refused" interval="400ms"
Sep 9 06:58:19.236657 kubelet[2521]: I0909 06:58:19.236014 2521 kubelet_node_status.go:75] "Attempting to register node" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.236657 kubelet[2521]: E0909 06:58:19.236612 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.42.222:6443/api/v1/nodes\": dial tcp 10.230.42.222:6443: connect: connection refused" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.442958 kubelet[2521]: I0909 06:58:19.442438 2521 kubelet_node_status.go:75] "Attempting to register node" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.442958 kubelet[2521]: E0909 06:58:19.442903 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.42.222:6443/api/v1/nodes\": dial tcp 10.230.42.222:6443: connect: connection refused" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.505134 containerd[1614]: time="2025-09-09T06:58:19.504953543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f5a1c.gb1.brightbox.com,Uid:67846797bccbaffffc79e0f25c07c5d3,Namespace:kube-system,Attempt:0,}"
Sep 9 06:58:19.517790 containerd[1614]: time="2025-09-09T06:58:19.517684761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f5a1c.gb1.brightbox.com,Uid:bc9bf00fada7b8dc8c27db6101a80089,Namespace:kube-system,Attempt:0,}"
Sep 9 06:58:19.520845 containerd[1614]: time="2025-09-09T06:58:19.520801433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f5a1c.gb1.brightbox.com,Uid:b3e7a35643cebcea579edf608c78b5ac,Namespace:kube-system,Attempt:0,}"
Sep 9 06:58:19.631040 kubelet[2521]: E0909 06:58:19.630920 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.42.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f5a1c.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.42.222:6443: connect: connection refused" interval="800ms"
Sep 9 06:58:19.709737 containerd[1614]: time="2025-09-09T06:58:19.709251876Z" level=info msg="connecting to shim 4444d9e64505d3921c68fa269d3108414342f9d2214ae5addd57e918c0e3c839" address="unix:///run/containerd/s/230e2ace270e34d2d33f8c52ac773d3bb5cda7086707bd283a8fb961a48cc15f" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:58:19.719695 containerd[1614]: time="2025-09-09T06:58:19.719639947Z" level=info msg="connecting to shim 7b455be0bf0f392dc81dde32addb1b5a169a903b51de01239219ad6573c4dc9e" address="unix:///run/containerd/s/d4b3f4ea9dbfb38f9462378daae0ad3cbc6e563a197963ad4ba06c59bc9ad3d6" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:58:19.722259 containerd[1614]: time="2025-09-09T06:58:19.721882789Z" level=info msg="connecting to shim 1f0469ad5bf52f38182f9497f80185401734d081ef0be245c2f563b42d8d9a1e" address="unix:///run/containerd/s/c08e225467c11282417a8b889f821d6c196c832eb055075e6079a52ca348ac42" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:58:19.848323 kubelet[2521]: I0909 06:58:19.848246 2521 kubelet_node_status.go:75] "Attempting to register node" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.849836 kubelet[2521]: E0909 06:58:19.849550 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.42.222:6443/api/v1/nodes\": dial tcp 10.230.42.222:6443: connect: connection refused" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:19.850302 systemd[1]: Started cri-containerd-7b455be0bf0f392dc81dde32addb1b5a169a903b51de01239219ad6573c4dc9e.scope - libcontainer container 7b455be0bf0f392dc81dde32addb1b5a169a903b51de01239219ad6573c4dc9e.
Sep 9 06:58:19.869426 systemd[1]: Started cri-containerd-1f0469ad5bf52f38182f9497f80185401734d081ef0be245c2f563b42d8d9a1e.scope - libcontainer container 1f0469ad5bf52f38182f9497f80185401734d081ef0be245c2f563b42d8d9a1e.
Sep 9 06:58:19.872667 systemd[1]: Started cri-containerd-4444d9e64505d3921c68fa269d3108414342f9d2214ae5addd57e918c0e3c839.scope - libcontainer container 4444d9e64505d3921c68fa269d3108414342f9d2214ae5addd57e918c0e3c839.
Sep 9 06:58:20.012976 containerd[1614]: time="2025-09-09T06:58:20.012622334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f5a1c.gb1.brightbox.com,Uid:bc9bf00fada7b8dc8c27db6101a80089,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f0469ad5bf52f38182f9497f80185401734d081ef0be245c2f563b42d8d9a1e\""
Sep 9 06:58:20.020197 containerd[1614]: time="2025-09-09T06:58:20.020118529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f5a1c.gb1.brightbox.com,Uid:b3e7a35643cebcea579edf608c78b5ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"4444d9e64505d3921c68fa269d3108414342f9d2214ae5addd57e918c0e3c839\""
Sep 9 06:58:20.022091 containerd[1614]: time="2025-09-09T06:58:20.021069771Z" level=info msg="CreateContainer within sandbox \"1f0469ad5bf52f38182f9497f80185401734d081ef0be245c2f563b42d8d9a1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 06:58:20.025927 containerd[1614]: time="2025-09-09T06:58:20.025893368Z" level=info msg="CreateContainer within sandbox \"4444d9e64505d3921c68fa269d3108414342f9d2214ae5addd57e918c0e3c839\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 06:58:20.057268 containerd[1614]: time="2025-09-09T06:58:20.057214044Z" level=info msg="Container 83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:20.074576 containerd[1614]: time="2025-09-09T06:58:20.074493458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f5a1c.gb1.brightbox.com,Uid:67846797bccbaffffc79e0f25c07c5d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b455be0bf0f392dc81dde32addb1b5a169a903b51de01239219ad6573c4dc9e\""
Sep 9 06:58:20.080083 containerd[1614]: time="2025-09-09T06:58:20.079279560Z" level=info msg="Container cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:20.080083 containerd[1614]: time="2025-09-09T06:58:20.079383041Z" level=info msg="CreateContainer within sandbox \"7b455be0bf0f392dc81dde32addb1b5a169a903b51de01239219ad6573c4dc9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 06:58:20.085811 containerd[1614]: time="2025-09-09T06:58:20.085687906Z" level=info msg="CreateContainer within sandbox \"1f0469ad5bf52f38182f9497f80185401734d081ef0be245c2f563b42d8d9a1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958\""
Sep 9 06:58:20.087098 containerd[1614]: time="2025-09-09T06:58:20.087070009Z" level=info msg="StartContainer for \"83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958\""
Sep 9 06:58:20.093578 containerd[1614]: time="2025-09-09T06:58:20.093450468Z" level=info msg="connecting to shim 83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958" address="unix:///run/containerd/s/c08e225467c11282417a8b889f821d6c196c832eb055075e6079a52ca348ac42" protocol=ttrpc version=3
Sep 9 06:58:20.100230 containerd[1614]: time="2025-09-09T06:58:20.100194585Z" level=info msg="Container cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:20.103092 containerd[1614]: time="2025-09-09T06:58:20.103005662Z" level=info msg="CreateContainer within sandbox \"4444d9e64505d3921c68fa269d3108414342f9d2214ae5addd57e918c0e3c839\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e\""
Sep 9 06:58:20.103713 containerd[1614]: time="2025-09-09T06:58:20.103668366Z" level=info msg="StartContainer for \"cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e\""
Sep 9 06:58:20.107997 containerd[1614]: time="2025-09-09T06:58:20.107896676Z" level=info msg="connecting to shim cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e" address="unix:///run/containerd/s/230e2ace270e34d2d33f8c52ac773d3bb5cda7086707bd283a8fb961a48cc15f" protocol=ttrpc version=3
Sep 9 06:58:20.111818 containerd[1614]: time="2025-09-09T06:58:20.111780885Z" level=info msg="CreateContainer within sandbox \"7b455be0bf0f392dc81dde32addb1b5a169a903b51de01239219ad6573c4dc9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041\""
Sep 9 06:58:20.113794 containerd[1614]: time="2025-09-09T06:58:20.113759295Z" level=info msg="StartContainer for \"cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041\""
Sep 9 06:58:20.120351 containerd[1614]: time="2025-09-09T06:58:20.120315306Z" level=info msg="connecting to shim cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041" address="unix:///run/containerd/s/d4b3f4ea9dbfb38f9462378daae0ad3cbc6e563a197963ad4ba06c59bc9ad3d6" protocol=ttrpc version=3
Sep 9 06:58:20.130557 systemd[1]: Started cri-containerd-83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958.scope - libcontainer container 83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958.
Sep 9 06:58:20.143494 systemd[1]: Started cri-containerd-cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e.scope - libcontainer container cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e.
Sep 9 06:58:20.184285 systemd[1]: Started cri-containerd-cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041.scope - libcontainer container cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041.
Sep 9 06:58:20.230418 kubelet[2521]: W0909 06:58:20.230300 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.42.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:20.230603 kubelet[2521]: E0909 06:58:20.230438 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.42.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:20.278618 kubelet[2521]: W0909 06:58:20.277302 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.42.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:20.278618 kubelet[2521]: E0909 06:58:20.277524 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.42.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:20.285625 containerd[1614]: time="2025-09-09T06:58:20.285378366Z" level=info msg="StartContainer for \"83f95c8ce816db04a71cab722a5aa2f2e2d684cc301e86e7939d959f1db01958\" returns successfully"
Sep 9 06:58:20.305661 containerd[1614]: time="2025-09-09T06:58:20.305605875Z" level=info msg="StartContainer for \"cc940fae392998312f0c154503d146057443936f901a54a8a01c58ebf59dbd5e\" returns successfully"
Sep 9 06:58:20.346569 containerd[1614]: time="2025-09-09T06:58:20.346500705Z" level=info msg="StartContainer for \"cf8bb4d42e685b7311329e637e27868fa1d7b8f234005eec82307b0bb3e96041\" returns successfully"
Sep 9 06:58:20.402898 kubelet[2521]: W0909 06:58:20.402143 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.42.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:20.402898 kubelet[2521]: E0909 06:58:20.402840 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.42.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:20.433078 kubelet[2521]: E0909 06:58:20.432687 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.42.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f5a1c.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.42.222:6443: connect: connection refused" interval="1.6s"
Sep 9 06:58:20.434235 kubelet[2521]: W0909 06:58:20.433821 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.42.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f5a1c.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.42.222:6443: connect: connection refused
Sep 9 06:58:20.434235 kubelet[2521]: E0909 06:58:20.434182 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.42.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f5a1c.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.42.222:6443: connect: connection refused" logger="UnhandledError"
Sep 9 06:58:20.654996 kubelet[2521]: I0909 06:58:20.654846 2521 kubelet_node_status.go:75] "Attempting to register node" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:20.657399 kubelet[2521]: E0909 06:58:20.657328 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.42.222:6443/api/v1/nodes\": dial tcp 10.230.42.222:6443: connect: connection refused" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:21.111064 kubelet[2521]: E0909 06:58:21.110766 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:21.116412 kubelet[2521]: E0909 06:58:21.116215 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:21.120290 kubelet[2521]: E0909 06:58:21.120268 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:22.124317 kubelet[2521]: E0909 06:58:22.123146 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:22.124317 kubelet[2521]: E0909 06:58:22.123676 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:22.130622 kubelet[2521]: E0909 06:58:22.130426 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:22.260933 kubelet[2521]: I0909 06:58:22.260883 2521 kubelet_node_status.go:75] "Attempting to register node" node="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:23.124129
kubelet[2521]: E0909 06:58:23.124017 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.124626 kubelet[2521]: E0909 06:58:23.124600 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.159004 kubelet[2521]: E0909 06:58:23.158929 2521 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-f5a1c.gb1.brightbox.com\" not found" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.226107 kubelet[2521]: I0909 06:58:23.223814 2521 kubelet_node_status.go:78] "Successfully registered node" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.226107 kubelet[2521]: E0909 06:58:23.223870 2521 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-f5a1c.gb1.brightbox.com\": node \"srv-f5a1c.gb1.brightbox.com\" not found" Sep 9 06:58:23.318872 kubelet[2521]: E0909 06:58:23.318799 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-f5a1c.gb1.brightbox.com\" not found" Sep 9 06:58:23.475200 kubelet[2521]: I0909 06:58:23.475127 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.491673 kubelet[2521]: E0909 06:58:23.491324 2521 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-f5a1c.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.491673 kubelet[2521]: I0909 06:58:23.491373 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.495274 
kubelet[2521]: E0909 06:58:23.495245 2521 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.495587 kubelet[2521]: I0909 06:58:23.495387 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.497935 kubelet[2521]: E0909 06:58:23.497910 2521 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:23.951864 kubelet[2521]: I0909 06:58:23.951812 2521 apiserver.go:52] "Watching apiserver" Sep 9 06:58:23.975432 kubelet[2521]: I0909 06:58:23.975380 2521 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 06:58:24.125418 kubelet[2521]: I0909 06:58:24.125367 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:24.134147 kubelet[2521]: W0909 06:58:24.134097 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 06:58:25.001334 kubelet[2521]: I0909 06:58:25.001179 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:25.009276 kubelet[2521]: W0909 06:58:25.008614 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 06:58:25.279200 systemd[1]: Reload requested from client PID 2791 ('systemctl') (unit session-11.scope)... 
Sep 9 06:58:25.279879 systemd[1]: Reloading... Sep 9 06:58:25.448186 zram_generator::config[2845]: No configuration found. Sep 9 06:58:25.798344 systemd[1]: Reloading finished in 517 ms. Sep 9 06:58:25.851788 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 06:58:25.861436 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 06:58:25.861895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 06:58:25.861973 systemd[1]: kubelet.service: Consumed 1.256s CPU time, 130.4M memory peak. Sep 9 06:58:25.865626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 06:58:26.181148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 06:58:26.194923 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 06:58:26.274849 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 06:58:26.277103 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 06:58:26.277103 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 06:58:26.277103 kubelet[2900]: I0909 06:58:26.277010 2900 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 06:58:26.297122 kubelet[2900]: I0909 06:58:26.296824 2900 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 06:58:26.297122 kubelet[2900]: I0909 06:58:26.296881 2900 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 06:58:26.297543 kubelet[2900]: I0909 06:58:26.297410 2900 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 06:58:26.301097 kubelet[2900]: I0909 06:58:26.301017 2900 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 06:58:26.306966 kubelet[2900]: I0909 06:58:26.306309 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 06:58:26.333726 kubelet[2900]: I0909 06:58:26.333572 2900 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 06:58:26.340253 kubelet[2900]: I0909 06:58:26.340171 2900 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 06:58:26.340676 kubelet[2900]: I0909 06:58:26.340596 2900 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 06:58:26.340894 kubelet[2900]: I0909 06:58:26.340677 2900 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-f5a1c.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 06:58:26.341081 kubelet[2900]: I0909 06:58:26.340912 2900 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 9 06:58:26.341081 kubelet[2900]: I0909 06:58:26.340931 2900 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 06:58:26.341081 kubelet[2900]: I0909 06:58:26.341009 2900 state_mem.go:36] "Initialized new in-memory state store" Sep 9 06:58:26.341304 kubelet[2900]: I0909 06:58:26.341276 2900 kubelet.go:446] "Attempting to sync node with API server" Sep 9 06:58:26.352117 kubelet[2900]: I0909 06:58:26.351873 2900 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 06:58:26.352117 kubelet[2900]: I0909 06:58:26.351943 2900 kubelet.go:352] "Adding apiserver pod source" Sep 9 06:58:26.352117 kubelet[2900]: I0909 06:58:26.351964 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 06:58:26.356563 kubelet[2900]: I0909 06:58:26.356512 2900 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 06:58:26.358212 kubelet[2900]: I0909 06:58:26.357119 2900 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 06:58:26.358212 kubelet[2900]: I0909 06:58:26.357716 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 06:58:26.358212 kubelet[2900]: I0909 06:58:26.357752 2900 server.go:1287] "Started kubelet" Sep 9 06:58:26.378878 kubelet[2900]: I0909 06:58:26.378201 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 06:58:26.383659 kubelet[2900]: I0909 06:58:26.383555 2900 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 06:58:26.400837 kubelet[2900]: I0909 06:58:26.398886 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 06:58:26.404224 kubelet[2900]: I0909 06:58:26.404094 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 06:58:26.407107 
kubelet[2900]: I0909 06:58:26.405913 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 06:58:26.415357 kubelet[2900]: I0909 06:58:26.406893 2900 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 06:58:26.415357 kubelet[2900]: I0909 06:58:26.407358 2900 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 06:58:26.415357 kubelet[2900]: I0909 06:58:26.414935 2900 server.go:479] "Adding debug handlers to kubelet server" Sep 9 06:58:26.419075 kubelet[2900]: I0909 06:58:26.418733 2900 reconciler.go:26] "Reconciler: start to sync state" Sep 9 06:58:26.419493 kubelet[2900]: I0909 06:58:26.419468 2900 factory.go:221] Registration of the systemd container factory successfully Sep 9 06:58:26.420003 kubelet[2900]: I0909 06:58:26.419967 2900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 06:58:26.429315 kubelet[2900]: E0909 06:58:26.429227 2900 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 06:58:26.431733 kubelet[2900]: I0909 06:58:26.430928 2900 factory.go:221] Registration of the containerd container factory successfully Sep 9 06:58:26.441816 kubelet[2900]: I0909 06:58:26.441749 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 06:58:26.474212 kubelet[2900]: I0909 06:58:26.474158 2900 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 06:58:26.474376 kubelet[2900]: I0909 06:58:26.474237 2900 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 06:58:26.474376 kubelet[2900]: I0909 06:58:26.474293 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 06:58:26.474376 kubelet[2900]: I0909 06:58:26.474308 2900 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 06:58:26.484874 kubelet[2900]: E0909 06:58:26.482500 2900 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 06:58:26.584243 kubelet[2900]: E0909 06:58:26.584192 2900 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588444 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588469 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588507 2900 state_mem.go:36] "Initialized new in-memory state store" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588808 2900 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588827 2900 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588856 2900 policy_none.go:49] "None policy: Start" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588873 2900 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.588890 2900 state_mem.go:35] "Initializing new in-memory state store" Sep 9 06:58:26.590114 kubelet[2900]: I0909 06:58:26.589100 2900 state_mem.go:75] "Updated machine memory state" Sep 9 06:58:26.599069 kubelet[2900]: I0909 06:58:26.598992 2900 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 06:58:26.604846 kubelet[2900]: I0909 06:58:26.604809 2900 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 06:58:26.604954 kubelet[2900]: I0909 06:58:26.604847 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 06:58:26.606671 kubelet[2900]: I0909 06:58:26.606259 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 06:58:26.613014 kubelet[2900]: E0909 06:58:26.612979 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 06:58:26.735909 kubelet[2900]: I0909 06:58:26.735741 2900 kubelet_node_status.go:75] "Attempting to register node" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.750507 kubelet[2900]: I0909 06:58:26.749189 2900 kubelet_node_status.go:124] "Node was previously registered" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.750507 kubelet[2900]: I0909 06:58:26.749370 2900 kubelet_node_status.go:78] "Successfully registered node" node="srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.789151 kubelet[2900]: I0909 06:58:26.789098 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.789151 kubelet[2900]: I0909 06:58:26.789162 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.791795 kubelet[2900]: I0909 06:58:26.791001 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.804998 kubelet[2900]: W0909 06:58:26.803183 2900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots] Sep 9 06:58:26.806987 kubelet[2900]: W0909 06:58:26.805461 2900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 06:58:26.806987 kubelet[2900]: E0909 06:58:26.805526 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.806987 kubelet[2900]: W0909 06:58:26.805960 2900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 06:58:26.806987 kubelet[2900]: E0909 06:58:26.806082 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.822656 kubelet[2900]: I0909 06:58:26.822611 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-flexvolume-dir\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.822779 kubelet[2900]: I0909 06:58:26.822665 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-kubeconfig\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.822779 kubelet[2900]: I0909 06:58:26.822711 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67846797bccbaffffc79e0f25c07c5d3-kubeconfig\") pod \"kube-scheduler-srv-f5a1c.gb1.brightbox.com\" (UID: \"67846797bccbaffffc79e0f25c07c5d3\") " pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.822779 kubelet[2900]: I0909 06:58:26.822736 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3e7a35643cebcea579edf608c78b5ac-ca-certs\") pod \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" (UID: \"b3e7a35643cebcea579edf608c78b5ac\") " pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.822779 kubelet[2900]: I0909 06:58:26.822776 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3e7a35643cebcea579edf608c78b5ac-k8s-certs\") pod \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" (UID: \"b3e7a35643cebcea579edf608c78b5ac\") " pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.823089 kubelet[2900]: I0909 06:58:26.822800 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3e7a35643cebcea579edf608c78b5ac-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" (UID: \"b3e7a35643cebcea579edf608c78b5ac\") " pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.823089 kubelet[2900]: I0909 06:58:26.822837 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-ca-certs\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 
06:58:26.823089 kubelet[2900]: I0909 06:58:26.822861 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-k8s-certs\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:26.823089 kubelet[2900]: I0909 06:58:26.822904 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc9bf00fada7b8dc8c27db6101a80089-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f5a1c.gb1.brightbox.com\" (UID: \"bc9bf00fada7b8dc8c27db6101a80089\") " pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:27.357079 kubelet[2900]: I0909 06:58:27.354634 2900 apiserver.go:52] "Watching apiserver" Sep 9 06:58:27.416179 kubelet[2900]: I0909 06:58:27.416143 2900 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 06:58:27.422025 kubelet[2900]: I0909 06:58:27.421845 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" podStartSLOduration=1.421785535 podStartE2EDuration="1.421785535s" podCreationTimestamp="2025-09-09 06:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 06:58:27.402792622 +0000 UTC m=+1.199969206" watchObservedRunningTime="2025-09-09 06:58:27.421785535 +0000 UTC m=+1.218962116" Sep 9 06:58:27.448323 kubelet[2900]: I0909 06:58:27.447917 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" podStartSLOduration=3.447761682 podStartE2EDuration="3.447761682s" 
podCreationTimestamp="2025-09-09 06:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 06:58:27.423597582 +0000 UTC m=+1.220774167" watchObservedRunningTime="2025-09-09 06:58:27.447761682 +0000 UTC m=+1.244938256" Sep 9 06:58:27.449570 kubelet[2900]: I0909 06:58:27.449393 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-f5a1c.gb1.brightbox.com" podStartSLOduration=2.449382733 podStartE2EDuration="2.449382733s" podCreationTimestamp="2025-09-09 06:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 06:58:27.446848014 +0000 UTC m=+1.244024603" watchObservedRunningTime="2025-09-09 06:58:27.449382733 +0000 UTC m=+1.246559325" Sep 9 06:58:27.553669 kubelet[2900]: I0909 06:58:27.553441 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:27.555611 kubelet[2900]: I0909 06:58:27.554056 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:27.569440 kubelet[2900]: W0909 06:58:27.569109 2900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 06:58:27.569440 kubelet[2900]: E0909 06:58:27.569185 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-f5a1c.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-f5a1c.gb1.brightbox.com" Sep 9 06:58:27.577320 kubelet[2900]: W0909 06:58:27.577262 2900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 06:58:27.577468 kubelet[2900]: E0909 
06:58:27.577353 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-f5a1c.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-f5a1c.gb1.brightbox.com"
Sep 9 06:58:32.132285 kubelet[2900]: I0909 06:58:32.132222 2900 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 06:58:32.133501 containerd[1614]: time="2025-09-09T06:58:32.133323044Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 06:58:32.134576 kubelet[2900]: I0909 06:58:32.134120 2900 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 06:58:33.000975 systemd[1]: Created slice kubepods-besteffort-pod2cb62c3b_65b2_4011_a044_9ee74630739b.slice - libcontainer container kubepods-besteffort-pod2cb62c3b_65b2_4011_a044_9ee74630739b.slice.
Sep 9 06:58:33.069062 kubelet[2900]: I0909 06:58:33.068997 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cb62c3b-65b2-4011-a044-9ee74630739b-lib-modules\") pod \"kube-proxy-xxlm6\" (UID: \"2cb62c3b-65b2-4011-a044-9ee74630739b\") " pod="kube-system/kube-proxy-xxlm6"
Sep 9 06:58:33.069551 kubelet[2900]: I0909 06:58:33.069356 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cb62c3b-65b2-4011-a044-9ee74630739b-kube-proxy\") pod \"kube-proxy-xxlm6\" (UID: \"2cb62c3b-65b2-4011-a044-9ee74630739b\") " pod="kube-system/kube-proxy-xxlm6"
Sep 9 06:58:33.069551 kubelet[2900]: I0909 06:58:33.069427 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cb62c3b-65b2-4011-a044-9ee74630739b-xtables-lock\") pod \"kube-proxy-xxlm6\" (UID: \"2cb62c3b-65b2-4011-a044-9ee74630739b\") " pod="kube-system/kube-proxy-xxlm6"
Sep 9 06:58:33.069551 kubelet[2900]: I0909 06:58:33.069500 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rr7\" (UniqueName: \"kubernetes.io/projected/2cb62c3b-65b2-4011-a044-9ee74630739b-kube-api-access-x6rr7\") pod \"kube-proxy-xxlm6\" (UID: \"2cb62c3b-65b2-4011-a044-9ee74630739b\") " pod="kube-system/kube-proxy-xxlm6"
Sep 9 06:58:33.254411 systemd[1]: Created slice kubepods-besteffort-pod969c5710_7f88_434d_a350_9fd2c3303ef9.slice - libcontainer container kubepods-besteffort-pod969c5710_7f88_434d_a350_9fd2c3303ef9.slice.
Sep 9 06:58:33.271756 kubelet[2900]: I0909 06:58:33.271697 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/969c5710-7f88-434d-a350-9fd2c3303ef9-var-lib-calico\") pod \"tigera-operator-755d956888-ftkjz\" (UID: \"969c5710-7f88-434d-a350-9fd2c3303ef9\") " pod="tigera-operator/tigera-operator-755d956888-ftkjz"
Sep 9 06:58:33.271756 kubelet[2900]: I0909 06:58:33.271748 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdgw\" (UniqueName: \"kubernetes.io/projected/969c5710-7f88-434d-a350-9fd2c3303ef9-kube-api-access-grdgw\") pod \"tigera-operator-755d956888-ftkjz\" (UID: \"969c5710-7f88-434d-a350-9fd2c3303ef9\") " pod="tigera-operator/tigera-operator-755d956888-ftkjz"
Sep 9 06:58:33.311614 containerd[1614]: time="2025-09-09T06:58:33.311561693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxlm6,Uid:2cb62c3b-65b2-4011-a044-9ee74630739b,Namespace:kube-system,Attempt:0,}"
Sep 9 06:58:33.342898 containerd[1614]: time="2025-09-09T06:58:33.342236863Z" level=info msg="connecting to shim 79191976724abb4fccff0d012bb8ab0cf9e4b39346e8d2b69880f83f70340c2b" address="unix:///run/containerd/s/ffffca1a9b171d1bdb2c9ec597de530d36766afa691395207dc8b16a3140f784" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:58:33.381238 systemd[1]: Started cri-containerd-79191976724abb4fccff0d012bb8ab0cf9e4b39346e8d2b69880f83f70340c2b.scope - libcontainer container 79191976724abb4fccff0d012bb8ab0cf9e4b39346e8d2b69880f83f70340c2b.
Sep 9 06:58:33.448194 containerd[1614]: time="2025-09-09T06:58:33.448093177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxlm6,Uid:2cb62c3b-65b2-4011-a044-9ee74630739b,Namespace:kube-system,Attempt:0,} returns sandbox id \"79191976724abb4fccff0d012bb8ab0cf9e4b39346e8d2b69880f83f70340c2b\""
Sep 9 06:58:33.453678 containerd[1614]: time="2025-09-09T06:58:33.453552802Z" level=info msg="CreateContainer within sandbox \"79191976724abb4fccff0d012bb8ab0cf9e4b39346e8d2b69880f83f70340c2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 06:58:33.471061 containerd[1614]: time="2025-09-09T06:58:33.470233305Z" level=info msg="Container c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:33.481463 containerd[1614]: time="2025-09-09T06:58:33.481405361Z" level=info msg="CreateContainer within sandbox \"79191976724abb4fccff0d012bb8ab0cf9e4b39346e8d2b69880f83f70340c2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a\""
Sep 9 06:58:33.482623 containerd[1614]: time="2025-09-09T06:58:33.482595385Z" level=info msg="StartContainer for \"c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a\""
Sep 9 06:58:33.487290 containerd[1614]: time="2025-09-09T06:58:33.487250357Z" level=info msg="connecting to shim c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a" address="unix:///run/containerd/s/ffffca1a9b171d1bdb2c9ec597de530d36766afa691395207dc8b16a3140f784" protocol=ttrpc version=3
Sep 9 06:58:33.488719 systemd[1]: Started sshd@10-10.230.42.222:22-117.220.10.3:43041.service - OpenSSH per-connection server daemon (117.220.10.3:43041).
Sep 9 06:58:33.529312 systemd[1]: Started cri-containerd-c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a.scope - libcontainer container c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a.
Sep 9 06:58:33.563442 containerd[1614]: time="2025-09-09T06:58:33.563335006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-ftkjz,Uid:969c5710-7f88-434d-a350-9fd2c3303ef9,Namespace:tigera-operator,Attempt:0,}"
Sep 9 06:58:33.614396 containerd[1614]: time="2025-09-09T06:58:33.614315794Z" level=info msg="connecting to shim 3b036b569128f8cd5e4ef0b022311a0ffa773f5bb2eef651bb70f26f34109174" address="unix:///run/containerd/s/c06f29f548369e3f2630f214aaf9ac4b820e6d627fcd90a0b7c93308cdb38cda" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:58:33.623571 containerd[1614]: time="2025-09-09T06:58:33.623413666Z" level=info msg="StartContainer for \"c713be1352b7dc8ee21c07a09db9a1b039c1e5a3793b017c955b32c339216d5a\" returns successfully"
Sep 9 06:58:33.670326 systemd[1]: Started cri-containerd-3b036b569128f8cd5e4ef0b022311a0ffa773f5bb2eef651bb70f26f34109174.scope - libcontainer container 3b036b569128f8cd5e4ef0b022311a0ffa773f5bb2eef651bb70f26f34109174.
Sep 9 06:58:33.757709 containerd[1614]: time="2025-09-09T06:58:33.757640728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-ftkjz,Uid:969c5710-7f88-434d-a350-9fd2c3303ef9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b036b569128f8cd5e4ef0b022311a0ffa773f5bb2eef651bb70f26f34109174\""
Sep 9 06:58:33.762286 containerd[1614]: time="2025-09-09T06:58:33.762249355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 06:58:34.211850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925449614.mount: Deactivated successfully.
Sep 9 06:58:34.610970 kubelet[2900]: I0909 06:58:34.610370 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xxlm6" podStartSLOduration=2.610343736 podStartE2EDuration="2.610343736s" podCreationTimestamp="2025-09-09 06:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 06:58:34.609900872 +0000 UTC m=+8.407077470" watchObservedRunningTime="2025-09-09 06:58:34.610343736 +0000 UTC m=+8.407520313"
Sep 9 06:58:35.208217 sshd[2995]: Received disconnect from 117.220.10.3 port 43041:11: Bye Bye [preauth]
Sep 9 06:58:35.208217 sshd[2995]: Disconnected from authenticating user root 117.220.10.3 port 43041 [preauth]
Sep 9 06:58:35.211795 systemd[1]: sshd@10-10.230.42.222:22-117.220.10.3:43041.service: Deactivated successfully.
Sep 9 06:58:35.898560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166025757.mount: Deactivated successfully.
Sep 9 06:58:37.048381 containerd[1614]: time="2025-09-09T06:58:37.048288661Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:37.050190 containerd[1614]: time="2025-09-09T06:58:37.049433222Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 9 06:58:37.051124 containerd[1614]: time="2025-09-09T06:58:37.051090847Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:37.058275 containerd[1614]: time="2025-09-09T06:58:37.058211802Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:37.059546 containerd[1614]: time="2025-09-09T06:58:37.059510971Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.297210331s"
Sep 9 06:58:37.060400 containerd[1614]: time="2025-09-09T06:58:37.060357296Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 9 06:58:37.065086 containerd[1614]: time="2025-09-09T06:58:37.065048097Z" level=info msg="CreateContainer within sandbox \"3b036b569128f8cd5e4ef0b022311a0ffa773f5bb2eef651bb70f26f34109174\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 06:58:37.077060 containerd[1614]: time="2025-09-09T06:58:37.076787365Z" level=info msg="Container da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:37.083360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119097065.mount: Deactivated successfully.
Sep 9 06:58:37.086001 containerd[1614]: time="2025-09-09T06:58:37.085922619Z" level=info msg="CreateContainer within sandbox \"3b036b569128f8cd5e4ef0b022311a0ffa773f5bb2eef651bb70f26f34109174\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3\""
Sep 9 06:58:37.088346 containerd[1614]: time="2025-09-09T06:58:37.087331625Z" level=info msg="StartContainer for \"da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3\""
Sep 9 06:58:37.091742 containerd[1614]: time="2025-09-09T06:58:37.091694932Z" level=info msg="connecting to shim da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3" address="unix:///run/containerd/s/c06f29f548369e3f2630f214aaf9ac4b820e6d627fcd90a0b7c93308cdb38cda" protocol=ttrpc version=3
Sep 9 06:58:37.128021 systemd[1]: Started cri-containerd-da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3.scope - libcontainer container da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3.
Sep 9 06:58:37.180818 containerd[1614]: time="2025-09-09T06:58:37.180694003Z" level=info msg="StartContainer for \"da7d4e155884ebbd3a507054ec89dda17236970dd586b9403984d128cc22ddb3\" returns successfully"
Sep 9 06:58:37.621691 kubelet[2900]: I0909 06:58:37.621500 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-ftkjz" podStartSLOduration=1.320178975 podStartE2EDuration="4.621365165s" podCreationTimestamp="2025-09-09 06:58:33 +0000 UTC" firstStartedPulling="2025-09-09 06:58:33.761328441 +0000 UTC m=+7.558505011" lastFinishedPulling="2025-09-09 06:58:37.062514627 +0000 UTC m=+10.859691201" observedRunningTime="2025-09-09 06:58:37.620465362 +0000 UTC m=+11.417641951" watchObservedRunningTime="2025-09-09 06:58:37.621365165 +0000 UTC m=+11.418541744"
Sep 9 06:58:44.720903 sudo[1915]: pam_unix(sudo:session): session closed for user root
Sep 9 06:58:44.870068 sshd[1914]: Connection closed by 139.178.68.195 port 37444
Sep 9 06:58:44.870655 sshd-session[1905]: pam_unix(sshd:session): session closed for user core
Sep 9 06:58:44.880789 systemd[1]: sshd@8-10.230.42.222:22-139.178.68.195:37444.service: Deactivated successfully.
Sep 9 06:58:44.886589 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 06:58:44.887323 systemd[1]: session-11.scope: Consumed 6.944s CPU time, 157.7M memory peak.
Sep 9 06:58:44.891214 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit.
Sep 9 06:58:44.897041 systemd-logind[1558]: Removed session 11.
Sep 9 06:58:48.855867 systemd[1]: Created slice kubepods-besteffort-pod50819eb6_a6f7_4134_b87b_d7169e3ae55a.slice - libcontainer container kubepods-besteffort-pod50819eb6_a6f7_4134_b87b_d7169e3ae55a.slice.
Sep 9 06:58:48.884382 kubelet[2900]: I0909 06:58:48.883252 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2j69\" (UniqueName: \"kubernetes.io/projected/50819eb6-a6f7-4134-b87b-d7169e3ae55a-kube-api-access-m2j69\") pod \"calico-typha-79d6ccbc5-gvfrx\" (UID: \"50819eb6-a6f7-4134-b87b-d7169e3ae55a\") " pod="calico-system/calico-typha-79d6ccbc5-gvfrx"
Sep 9 06:58:48.884382 kubelet[2900]: I0909 06:58:48.883336 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50819eb6-a6f7-4134-b87b-d7169e3ae55a-tigera-ca-bundle\") pod \"calico-typha-79d6ccbc5-gvfrx\" (UID: \"50819eb6-a6f7-4134-b87b-d7169e3ae55a\") " pod="calico-system/calico-typha-79d6ccbc5-gvfrx"
Sep 9 06:58:48.884382 kubelet[2900]: I0909 06:58:48.883370 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50819eb6-a6f7-4134-b87b-d7169e3ae55a-typha-certs\") pod \"calico-typha-79d6ccbc5-gvfrx\" (UID: \"50819eb6-a6f7-4134-b87b-d7169e3ae55a\") " pod="calico-system/calico-typha-79d6ccbc5-gvfrx"
Sep 9 06:58:49.167084 containerd[1614]: time="2025-09-09T06:58:49.166311076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79d6ccbc5-gvfrx,Uid:50819eb6-a6f7-4134-b87b-d7169e3ae55a,Namespace:calico-system,Attempt:0,}"
Sep 9 06:58:49.216682 containerd[1614]: time="2025-09-09T06:58:49.216612097Z" level=info msg="connecting to shim 813c66037c4e1533fbece2f2f3623cede6f68edeac1f3c7b227738e464dcabf1" address="unix:///run/containerd/s/24e5d148a0fd4e68305b45877704eb8ea87eb3117949c419f444dc607cd579bf" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:58:49.295471 systemd[1]: Started cri-containerd-813c66037c4e1533fbece2f2f3623cede6f68edeac1f3c7b227738e464dcabf1.scope - libcontainer container 813c66037c4e1533fbece2f2f3623cede6f68edeac1f3c7b227738e464dcabf1.
Sep 9 06:58:49.330203 systemd[1]: Created slice kubepods-besteffort-podba866363_ba9e_43d2_ad7e_b70697898a94.slice - libcontainer container kubepods-besteffort-podba866363_ba9e_43d2_ad7e_b70697898a94.slice.
Sep 9 06:58:49.389588 kubelet[2900]: I0909 06:58:49.389501 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-cni-log-dir\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390540 kubelet[2900]: I0909 06:58:49.389606 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-policysync\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390540 kubelet[2900]: I0909 06:58:49.389638 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba866363-ba9e-43d2-ad7e-b70697898a94-tigera-ca-bundle\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390540 kubelet[2900]: I0909 06:58:49.389677 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-cni-bin-dir\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390540 kubelet[2900]: I0909 06:58:49.389705 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-var-run-calico\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390540 kubelet[2900]: I0909 06:58:49.389739 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwpr\" (UniqueName: \"kubernetes.io/projected/ba866363-ba9e-43d2-ad7e-b70697898a94-kube-api-access-bfwpr\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390777 kubelet[2900]: I0909 06:58:49.389783 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-cni-net-dir\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390777 kubelet[2900]: I0909 06:58:49.389817 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba866363-ba9e-43d2-ad7e-b70697898a94-node-certs\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390777 kubelet[2900]: I0909 06:58:49.389841 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-xtables-lock\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390777 kubelet[2900]: I0909 06:58:49.389870 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-flexvol-driver-host\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390777 kubelet[2900]: I0909 06:58:49.389916 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-lib-modules\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.390993 kubelet[2900]: I0909 06:58:49.389942 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba866363-ba9e-43d2-ad7e-b70697898a94-var-lib-calico\") pod \"calico-node-gkltq\" (UID: \"ba866363-ba9e-43d2-ad7e-b70697898a94\") " pod="calico-system/calico-node-gkltq"
Sep 9 06:58:49.468012 containerd[1614]: time="2025-09-09T06:58:49.467784534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79d6ccbc5-gvfrx,Uid:50819eb6-a6f7-4134-b87b-d7169e3ae55a,Namespace:calico-system,Attempt:0,} returns sandbox id \"813c66037c4e1533fbece2f2f3623cede6f68edeac1f3c7b227738e464dcabf1\""
Sep 9 06:58:49.474576 containerd[1614]: time="2025-09-09T06:58:49.474482027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 06:58:49.528660 kubelet[2900]: E0909 06:58:49.527132 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.528660 kubelet[2900]: W0909 06:58:49.527274 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.528660 kubelet[2900]: E0909 06:58:49.528553 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.542362 kubelet[2900]: E0909 06:58:49.542253 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.542362 kubelet[2900]: W0909 06:58:49.542282 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.542362 kubelet[2900]: E0909 06:58:49.542314 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.569203 kubelet[2900]: E0909 06:58:49.569125 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:58:49.640369 containerd[1614]: time="2025-09-09T06:58:49.640304167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gkltq,Uid:ba866363-ba9e-43d2-ad7e-b70697898a94,Namespace:calico-system,Attempt:0,}"
Sep 9 06:58:49.662600 kubelet[2900]: E0909 06:58:49.662549 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.662600 kubelet[2900]: W0909 06:58:49.662586 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.663496 kubelet[2900]: E0909 06:58:49.662616 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.663815 kubelet[2900]: E0909 06:58:49.663755 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.663815 kubelet[2900]: W0909 06:58:49.663790 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.663815 kubelet[2900]: E0909 06:58:49.663806 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.665557 kubelet[2900]: E0909 06:58:49.665531 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.665557 kubelet[2900]: W0909 06:58:49.665552 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.665929 kubelet[2900]: E0909 06:58:49.665568 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.672483 kubelet[2900]: E0909 06:58:49.672220 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.672483 kubelet[2900]: W0909 06:58:49.672249 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.672483 kubelet[2900]: E0909 06:58:49.672280 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.673771 kubelet[2900]: E0909 06:58:49.673672 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.673771 kubelet[2900]: W0909 06:58:49.673691 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.673771 kubelet[2900]: E0909 06:58:49.673712 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.674299 kubelet[2900]: E0909 06:58:49.674210 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.674299 kubelet[2900]: W0909 06:58:49.674227 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.674299 kubelet[2900]: E0909 06:58:49.674243 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.675212 kubelet[2900]: E0909 06:58:49.675114 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.675212 kubelet[2900]: W0909 06:58:49.675134 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.675212 kubelet[2900]: E0909 06:58:49.675150 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.676014 kubelet[2900]: E0909 06:58:49.675908 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.676014 kubelet[2900]: W0909 06:58:49.675927 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.676014 kubelet[2900]: E0909 06:58:49.675944 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.676944 kubelet[2900]: E0909 06:58:49.676746 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.676944 kubelet[2900]: W0909 06:58:49.676764 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.676944 kubelet[2900]: E0909 06:58:49.676780 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.677639 kubelet[2900]: E0909 06:58:49.677539 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.677639 kubelet[2900]: W0909 06:58:49.677567 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.677639 kubelet[2900]: E0909 06:58:49.677584 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.678449 kubelet[2900]: E0909 06:58:49.678344 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.678449 kubelet[2900]: W0909 06:58:49.678362 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.678449 kubelet[2900]: E0909 06:58:49.678381 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.680326 kubelet[2900]: E0909 06:58:49.679792 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.680326 kubelet[2900]: W0909 06:58:49.679812 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.680326 kubelet[2900]: E0909 06:58:49.679836 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.680326 kubelet[2900]: E0909 06:58:49.680137 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.680326 kubelet[2900]: W0909 06:58:49.680152 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.680326 kubelet[2900]: E0909 06:58:49.680169 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.682390 kubelet[2900]: E0909 06:58:49.680790 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.682390 kubelet[2900]: W0909 06:58:49.681302 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.682390 kubelet[2900]: E0909 06:58:49.681322 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.682390 kubelet[2900]: E0909 06:58:49.682262 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.682390 kubelet[2900]: W0909 06:58:49.682276 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.682390 kubelet[2900]: E0909 06:58:49.682291 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.683120 kubelet[2900]: E0909 06:58:49.682987 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.683120 kubelet[2900]: W0909 06:58:49.683006 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.683120 kubelet[2900]: E0909 06:58:49.683022 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.683622 kubelet[2900]: E0909 06:58:49.683534 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.683622 kubelet[2900]: W0909 06:58:49.683552 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.683622 kubelet[2900]: E0909 06:58:49.683568 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.684140 kubelet[2900]: E0909 06:58:49.684014 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.684140 kubelet[2900]: W0909 06:58:49.684031 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.684140 kubelet[2900]: E0909 06:58:49.684084 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.684656 kubelet[2900]: E0909 06:58:49.684566 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.684656 kubelet[2900]: W0909 06:58:49.684584 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.684656 kubelet[2900]: E0909 06:58:49.684599 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.685123 kubelet[2900]: E0909 06:58:49.684963 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.685123 kubelet[2900]: W0909 06:58:49.684981 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.685123 kubelet[2900]: E0909 06:58:49.684995 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.703672 kubelet[2900]: E0909 06:58:49.702110 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.703672 kubelet[2900]: W0909 06:58:49.702148 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.703672 kubelet[2900]: E0909 06:58:49.702174 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.703672 kubelet[2900]: I0909 06:58:49.702230 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f91ddf46-adf9-4330-b27d-18c1e9855030-registration-dir\") pod \"csi-node-driver-5qmtg\" (UID: \"f91ddf46-adf9-4330-b27d-18c1e9855030\") " pod="calico-system/csi-node-driver-5qmtg"
Sep 9 06:58:49.703672 kubelet[2900]: E0909 06:58:49.702534 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.703672 kubelet[2900]: W0909 06:58:49.702549 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.703672 kubelet[2900]: E0909 06:58:49.702588 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.703672 kubelet[2900]: E0909 06:58:49.703416 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.703672 kubelet[2900]: W0909 06:58:49.703435 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.704596 kubelet[2900]: E0909 06:58:49.703454 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.704596 kubelet[2900]: I0909 06:58:49.702618 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f91ddf46-adf9-4330-b27d-18c1e9855030-socket-dir\") pod \"csi-node-driver-5qmtg\" (UID: \"f91ddf46-adf9-4330-b27d-18c1e9855030\") " pod="calico-system/csi-node-driver-5qmtg" Sep 9 06:58:49.704596 kubelet[2900]: E0909 06:58:49.704186 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.704596 kubelet[2900]: W0909 06:58:49.704200 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.704596 kubelet[2900]: E0909 06:58:49.704215 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.706787 kubelet[2900]: E0909 06:58:49.705437 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.706787 kubelet[2900]: W0909 06:58:49.705456 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.707817 kubelet[2900]: E0909 06:58:49.706947 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.707817 kubelet[2900]: E0909 06:58:49.707270 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.707817 kubelet[2900]: W0909 06:58:49.707288 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.708242 kubelet[2900]: E0909 06:58:49.708078 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.708432 kubelet[2900]: E0909 06:58:49.708412 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.708528 kubelet[2900]: W0909 06:58:49.708508 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.708626 kubelet[2900]: E0909 06:58:49.708606 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.708752 kubelet[2900]: I0909 06:58:49.708730 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f91ddf46-adf9-4330-b27d-18c1e9855030-varrun\") pod \"csi-node-driver-5qmtg\" (UID: \"f91ddf46-adf9-4330-b27d-18c1e9855030\") " pod="calico-system/csi-node-driver-5qmtg" Sep 9 06:58:49.711294 kubelet[2900]: E0909 06:58:49.710863 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.711294 kubelet[2900]: W0909 06:58:49.710882 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.711294 kubelet[2900]: E0909 06:58:49.710909 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.712018 kubelet[2900]: E0909 06:58:49.711999 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.712366 kubelet[2900]: W0909 06:58:49.712160 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.713096 kubelet[2900]: E0909 06:58:49.712468 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.713714 kubelet[2900]: E0909 06:58:49.713582 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.713714 kubelet[2900]: W0909 06:58:49.713619 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.713714 kubelet[2900]: E0909 06:58:49.713637 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.715665 kubelet[2900]: I0909 06:58:49.714132 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f91ddf46-adf9-4330-b27d-18c1e9855030-kubelet-dir\") pod \"csi-node-driver-5qmtg\" (UID: \"f91ddf46-adf9-4330-b27d-18c1e9855030\") " pod="calico-system/csi-node-driver-5qmtg" Sep 9 06:58:49.716349 kubelet[2900]: E0909 06:58:49.716296 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.716671 kubelet[2900]: W0909 06:58:49.716316 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.716671 kubelet[2900]: E0909 06:58:49.716551 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.717475 kubelet[2900]: E0909 06:58:49.717350 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.717475 kubelet[2900]: W0909 06:58:49.717368 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.718461 kubelet[2900]: E0909 06:58:49.718156 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.720293 kubelet[2900]: E0909 06:58:49.720232 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.720525 kubelet[2900]: W0909 06:58:49.720438 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.720525 kubelet[2900]: E0909 06:58:49.720463 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.721144 kubelet[2900]: I0909 06:58:49.720818 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkgs\" (UniqueName: \"kubernetes.io/projected/f91ddf46-adf9-4330-b27d-18c1e9855030-kube-api-access-rdkgs\") pod \"csi-node-driver-5qmtg\" (UID: \"f91ddf46-adf9-4330-b27d-18c1e9855030\") " pod="calico-system/csi-node-driver-5qmtg" Sep 9 06:58:49.721635 kubelet[2900]: E0909 06:58:49.721542 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.721635 kubelet[2900]: W0909 06:58:49.721561 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.721635 kubelet[2900]: E0909 06:58:49.721577 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.722573 kubelet[2900]: E0909 06:58:49.722335 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.722573 kubelet[2900]: W0909 06:58:49.722354 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.722573 kubelet[2900]: E0909 06:58:49.722369 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.743168 containerd[1614]: time="2025-09-09T06:58:49.742798381Z" level=info msg="connecting to shim 4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa" address="unix:///run/containerd/s/1030457dbb7502155a20f17a43746442eee8af4be28c9e56cfcbfaba11031936" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:58:49.825578 kubelet[2900]: E0909 06:58:49.825222 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.825578 kubelet[2900]: W0909 06:58:49.825268 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.825578 kubelet[2900]: E0909 06:58:49.825306 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.826604 kubelet[2900]: E0909 06:58:49.825816 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.826604 kubelet[2900]: W0909 06:58:49.825831 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.826604 kubelet[2900]: E0909 06:58:49.826099 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.827372 kubelet[2900]: E0909 06:58:49.827145 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.827372 kubelet[2900]: W0909 06:58:49.827165 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.827372 kubelet[2900]: E0909 06:58:49.827199 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.830100 kubelet[2900]: E0909 06:58:49.829359 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.830100 kubelet[2900]: W0909 06:58:49.829393 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.830100 kubelet[2900]: E0909 06:58:49.829479 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.830897 kubelet[2900]: E0909 06:58:49.830730 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.830897 kubelet[2900]: W0909 06:58:49.830752 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.830897 kubelet[2900]: E0909 06:58:49.830790 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.832945 kubelet[2900]: E0909 06:58:49.832151 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.832945 kubelet[2900]: W0909 06:58:49.832171 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.832945 kubelet[2900]: E0909 06:58:49.832837 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.832945 kubelet[2900]: W0909 06:58:49.832851 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.834001 kubelet[2900]: E0909 06:58:49.833124 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.834001 kubelet[2900]: W0909 06:58:49.833137 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Sep 9 06:58:49.834484 kubelet[2900]: E0909 06:58:49.834151 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.834484 kubelet[2900]: E0909 06:58:49.834182 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.834484 kubelet[2900]: E0909 06:58:49.834202 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.834484 kubelet[2900]: E0909 06:58:49.834398 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.834484 kubelet[2900]: W0909 06:58:49.834413 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.834868 kubelet[2900]: E0909 06:58:49.834763 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.834868 kubelet[2900]: E0909 06:58:49.834851 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.834868 kubelet[2900]: W0909 06:58:49.834865 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.835111 kubelet[2900]: E0909 06:58:49.835054 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.835680 kubelet[2900]: E0909 06:58:49.835383 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.835680 kubelet[2900]: W0909 06:58:49.835402 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.835680 kubelet[2900]: E0909 06:58:49.835428 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.837804 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.843722 kubelet[2900]: W0909 06:58:49.837827 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.837846 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.839281 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.843722 kubelet[2900]: W0909 06:58:49.839295 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.839311 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.839661 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.843722 kubelet[2900]: W0909 06:58:49.839674 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.839688 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.843722 kubelet[2900]: E0909 06:58:49.841581 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.845323 kubelet[2900]: W0909 06:58:49.841597 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.845323 kubelet[2900]: E0909 06:58:49.841618 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.845323 kubelet[2900]: E0909 06:58:49.841844 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.845323 kubelet[2900]: W0909 06:58:49.841857 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.845323 kubelet[2900]: E0909 06:58:49.841885 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.845323 kubelet[2900]: E0909 06:58:49.842311 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.845323 kubelet[2900]: W0909 06:58:49.842325 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.845323 kubelet[2900]: E0909 06:58:49.842352 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.845323 kubelet[2900]: E0909 06:58:49.843489 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.845323 kubelet[2900]: W0909 06:58:49.843503 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.843518 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.844429 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.848788 kubelet[2900]: W0909 06:58:49.844444 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.844460 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.844835 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.848788 kubelet[2900]: W0909 06:58:49.844850 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.844864 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.845145 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.848788 kubelet[2900]: W0909 06:58:49.845159 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.848788 kubelet[2900]: E0909 06:58:49.845173 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.845819 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.851472 kubelet[2900]: W0909 06:58:49.845834 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.846433 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.851472 kubelet[2900]: W0909 06:58:49.846448 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.846463 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.847619 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 06:58:49.851472 kubelet[2900]: W0909 06:58:49.847636 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.847651 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.846291 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.851472 kubelet[2900]: E0909 06:58:49.848326 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.851959 kubelet[2900]: W0909 06:58:49.848341 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.851959 kubelet[2900]: E0909 06:58:49.848356 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:49.856413 systemd[1]: Started cri-containerd-4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa.scope - libcontainer container 4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa.
Sep 9 06:58:49.879080 kubelet[2900]: E0909 06:58:49.878914 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:49.879080 kubelet[2900]: W0909 06:58:49.878978 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:49.879080 kubelet[2900]: E0909 06:58:49.879007 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:50.017815 containerd[1614]: time="2025-09-09T06:58:50.017486793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gkltq,Uid:ba866363-ba9e-43d2-ad7e-b70697898a94,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\""
Sep 9 06:58:51.475569 kubelet[2900]: E0909 06:58:51.475482 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:58:51.540229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598007336.mount: Deactivated successfully.
Sep 9 06:58:53.475862 kubelet[2900]: E0909 06:58:53.475648 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:58:53.814415 containerd[1614]: time="2025-09-09T06:58:53.814247303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:53.816236 containerd[1614]: time="2025-09-09T06:58:53.816053441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 9 06:58:53.817260 containerd[1614]: time="2025-09-09T06:58:53.817227151Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:53.820384 containerd[1614]: time="2025-09-09T06:58:53.820317677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:53.821304 containerd[1614]: time="2025-09-09T06:58:53.821121145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.346533064s"
Sep 9 06:58:53.821304 containerd[1614]: time="2025-09-09T06:58:53.821166566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 9 06:58:53.822854 containerd[1614]: time="2025-09-09T06:58:53.822825638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 06:58:53.846403 containerd[1614]: time="2025-09-09T06:58:53.846344808Z" level=info msg="CreateContainer within sandbox \"813c66037c4e1533fbece2f2f3623cede6f68edeac1f3c7b227738e464dcabf1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 06:58:53.857754 containerd[1614]: time="2025-09-09T06:58:53.857715678Z" level=info msg="Container 0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:53.863806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428646015.mount: Deactivated successfully.
Sep 9 06:58:53.869574 containerd[1614]: time="2025-09-09T06:58:53.869524647Z" level=info msg="CreateContainer within sandbox \"813c66037c4e1533fbece2f2f3623cede6f68edeac1f3c7b227738e464dcabf1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933\""
Sep 9 06:58:53.870718 containerd[1614]: time="2025-09-09T06:58:53.870681073Z" level=info msg="StartContainer for \"0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933\""
Sep 9 06:58:53.873253 containerd[1614]: time="2025-09-09T06:58:53.873154236Z" level=info msg="connecting to shim 0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933" address="unix:///run/containerd/s/24e5d148a0fd4e68305b45877704eb8ea87eb3117949c419f444dc607cd579bf" protocol=ttrpc version=3
Sep 9 06:58:53.912286 systemd[1]: Started cri-containerd-0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933.scope - libcontainer container 0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933.
Sep 9 06:58:54.001417 containerd[1614]: time="2025-09-09T06:58:54.001339673Z" level=info msg="StartContainer for \"0ee9a08d85e2407c09961f48b97f4a1ef91de8efec8acafa5e592aa4d62da933\" returns successfully"
Sep 9 06:58:54.496233 kubelet[2900]: E0909 06:58:54.496147 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:58:54.712080 kubelet[2900]: I0909 06:58:54.711832 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79d6ccbc5-gvfrx" podStartSLOduration=2.361292093 podStartE2EDuration="6.711561555s" podCreationTimestamp="2025-09-09 06:58:48 +0000 UTC" firstStartedPulling="2025-09-09 06:58:49.472433414 +0000 UTC m=+23.269609983" lastFinishedPulling="2025-09-09 06:58:53.822702878 +0000 UTC m=+27.619879445" observedRunningTime="2025-09-09 06:58:54.707436094 +0000 UTC m=+28.504612701" watchObservedRunningTime="2025-09-09 06:58:54.711561555 +0000 UTC m=+28.508738129"
Sep 9 06:58:54.721561 kubelet[2900]: E0909 06:58:54.721515 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.721762 kubelet[2900]: W0909 06:58:54.721591 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.721762 kubelet[2900]: E0909 06:58:54.721625 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.722212 kubelet[2900]: E0909 06:58:54.722190 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.722212 kubelet[2900]: W0909 06:58:54.722210 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.722837 kubelet[2900]: E0909 06:58:54.722226 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.722837 kubelet[2900]: E0909 06:58:54.722499 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.722837 kubelet[2900]: W0909 06:58:54.722514 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.722837 kubelet[2900]: E0909 06:58:54.722529 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.723241 kubelet[2900]: E0909 06:58:54.723219 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.723241 kubelet[2900]: W0909 06:58:54.723240 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.723365 kubelet[2900]: E0909 06:58:54.723258 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.723552 kubelet[2900]: E0909 06:58:54.723533 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.723552 kubelet[2900]: W0909 06:58:54.723551 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.723665 kubelet[2900]: E0909 06:58:54.723578 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.723842 kubelet[2900]: E0909 06:58:54.723813 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.723842 kubelet[2900]: W0909 06:58:54.723825 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.723842 kubelet[2900]: E0909 06:58:54.723838 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.724155 kubelet[2900]: E0909 06:58:54.724135 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.724155 kubelet[2900]: W0909 06:58:54.724154 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.724354 kubelet[2900]: E0909 06:58:54.724168 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.724416 kubelet[2900]: E0909 06:58:54.724389 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.724416 kubelet[2900]: W0909 06:58:54.724401 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.724416 kubelet[2900]: E0909 06:58:54.724415 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.724884 kubelet[2900]: E0909 06:58:54.724638 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.724884 kubelet[2900]: W0909 06:58:54.724651 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.724884 kubelet[2900]: E0909 06:58:54.724664 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.725024 kubelet[2900]: E0909 06:58:54.724899 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.725024 kubelet[2900]: W0909 06:58:54.724912 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.725024 kubelet[2900]: E0909 06:58:54.724925 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.725331 kubelet[2900]: E0909 06:58:54.725312 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.725408 kubelet[2900]: W0909 06:58:54.725333 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.725408 kubelet[2900]: E0909 06:58:54.725350 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.725588 kubelet[2900]: E0909 06:58:54.725562 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.725588 kubelet[2900]: W0909 06:58:54.725574 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.725588 kubelet[2900]: E0909 06:58:54.725587 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.725830 kubelet[2900]: E0909 06:58:54.725812 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.725889 kubelet[2900]: W0909 06:58:54.725832 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.725889 kubelet[2900]: E0909 06:58:54.725847 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.726128 kubelet[2900]: E0909 06:58:54.726094 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.726128 kubelet[2900]: W0909 06:58:54.726106 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.726128 kubelet[2900]: E0909 06:58:54.726120 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.726364 kubelet[2900]: E0909 06:58:54.726345 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.726423 kubelet[2900]: W0909 06:58:54.726363 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.726423 kubelet[2900]: E0909 06:58:54.726381 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.776527 kubelet[2900]: E0909 06:58:54.776174 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.776527 kubelet[2900]: W0909 06:58:54.776214 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.776527 kubelet[2900]: E0909 06:58:54.776247 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.777098 kubelet[2900]: E0909 06:58:54.776855 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.777098 kubelet[2900]: W0909 06:58:54.776874 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.777098 kubelet[2900]: E0909 06:58:54.776900 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.777566 kubelet[2900]: E0909 06:58:54.777396 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.777566 kubelet[2900]: W0909 06:58:54.777416 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.777566 kubelet[2900]: E0909 06:58:54.777431 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.777939 kubelet[2900]: E0909 06:58:54.777841 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.777939 kubelet[2900]: W0909 06:58:54.777877 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.777939 kubelet[2900]: E0909 06:58:54.777892 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.778260 kubelet[2900]: E0909 06:58:54.778191 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.778260 kubelet[2900]: W0909 06:58:54.778218 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.778260 kubelet[2900]: E0909 06:58:54.778243 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.778720 kubelet[2900]: E0909 06:58:54.778625 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.778720 kubelet[2900]: W0909 06:58:54.778638 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.778720 kubelet[2900]: E0909 06:58:54.778672 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.778933 kubelet[2900]: E0909 06:58:54.778915 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.778933 kubelet[2900]: W0909 06:58:54.778932 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.779114 kubelet[2900]: E0909 06:58:54.778965 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.779416 kubelet[2900]: E0909 06:58:54.779392 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.779416 kubelet[2900]: W0909 06:58:54.779410 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.779601 kubelet[2900]: E0909 06:58:54.779504 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.780466 kubelet[2900]: E0909 06:58:54.780406 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.780666 kubelet[2900]: W0909 06:58:54.780562 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.780865 kubelet[2900]: E0909 06:58:54.780715 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.781187 kubelet[2900]: E0909 06:58:54.781166 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.781455 kubelet[2900]: W0909 06:58:54.781277 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.781455 kubelet[2900]: E0909 06:58:54.781310 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.781665 kubelet[2900]: E0909 06:58:54.781648 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.781813 kubelet[2900]: W0909 06:58:54.781747 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.781813 kubelet[2900]: E0909 06:58:54.781786 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.782094 kubelet[2900]: E0909 06:58:54.782071 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.782094 kubelet[2900]: W0909 06:58:54.782093 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.782230 kubelet[2900]: E0909 06:58:54.782117 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.782380 kubelet[2900]: E0909 06:58:54.782361 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.782380 kubelet[2900]: W0909 06:58:54.782379 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.782667 kubelet[2900]: E0909 06:58:54.782402 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.782667 kubelet[2900]: E0909 06:58:54.782632 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.782667 kubelet[2900]: W0909 06:58:54.782645 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.782667 kubelet[2900]: E0909 06:58:54.782665 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.783266 kubelet[2900]: E0909 06:58:54.783177 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.783266 kubelet[2900]: W0909 06:58:54.783200 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.783266 kubelet[2900]: E0909 06:58:54.783228 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.783478 kubelet[2900]: E0909 06:58:54.783458 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.783478 kubelet[2900]: W0909 06:58:54.783477 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.783628 kubelet[2900]: E0909 06:58:54.783499 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.783762 kubelet[2900]: E0909 06:58:54.783744 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.783825 kubelet[2900]: W0909 06:58:54.783762 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.783825 kubelet[2900]: E0909 06:58:54.783778 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:54.784249 kubelet[2900]: E0909 06:58:54.784229 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 06:58:54.784249 kubelet[2900]: W0909 06:58:54.784247 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 06:58:54.784348 kubelet[2900]: E0909 06:58:54.784262 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 06:58:55.451606 containerd[1614]: time="2025-09-09T06:58:55.451458790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:55.453419 containerd[1614]: time="2025-09-09T06:58:55.453360414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 9 06:58:55.453970 containerd[1614]: time="2025-09-09T06:58:55.453428275Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:55.479836 containerd[1614]: time="2025-09-09T06:58:55.479753806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:58:55.481091 containerd[1614]: time="2025-09-09T06:58:55.480674103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.657683672s"
Sep 9 06:58:55.481091 containerd[1614]: time="2025-09-09T06:58:55.480734039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 9 06:58:55.486348 containerd[1614]: time="2025-09-09T06:58:55.486291966Z" level=info msg="CreateContainer within sandbox \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 9 06:58:55.495572 containerd[1614]: time="2025-09-09T06:58:55.495527208Z" level=info msg="Container 96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:58:55.515883 containerd[1614]: time="2025-09-09T06:58:55.515786963Z" level=info msg="CreateContainer within sandbox \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\""
Sep 9 06:58:55.517322 containerd[1614]: time="2025-09-09T06:58:55.517194892Z" level=info msg="StartContainer for \"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\""
Sep 9 06:58:55.522117 containerd[1614]: time="2025-09-09T06:58:55.521455883Z" level=info msg="connecting to shim 96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052" address="unix:///run/containerd/s/1030457dbb7502155a20f17a43746442eee8af4be28c9e56cfcbfaba11031936" protocol=ttrpc version=3
Sep 9 06:58:55.560516 systemd[1]: Started cri-containerd-96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052.scope - libcontainer container 96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052.
Sep 9 06:58:55.655284 containerd[1614]: time="2025-09-09T06:58:55.655206902Z" level=info msg="StartContainer for \"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\" returns successfully"
Sep 9 06:58:55.679987 systemd[1]: cri-containerd-96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052.scope: Deactivated successfully.
Sep 9 06:58:55.710363 containerd[1614]: time="2025-09-09T06:58:55.709256534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\" id:\"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\" pid:3574 exited_at:{seconds:1757401135 nanos:682495934}"
Sep 9 06:58:55.716838 kubelet[2900]: I0909 06:58:55.716716 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 06:58:55.723059 containerd[1614]: time="2025-09-09T06:58:55.722988080Z" level=info msg="received exit event container_id:\"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\" id:\"96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052\" pid:3574 exited_at:{seconds:1757401135 nanos:682495934}"
Sep 9 06:58:55.778716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96f4d6f50ff4de67bdf443020a53b5f2a1b40903e809ff0a0148c1ac5b226052-rootfs.mount: Deactivated successfully.
Sep 9 06:58:56.476526 kubelet[2900]: E0909 06:58:56.475312 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:58:56.724849 containerd[1614]: time="2025-09-09T06:58:56.724778524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 9 06:58:58.475844 kubelet[2900]: E0909 06:58:58.475199 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:59:00.475561 kubelet[2900]: E0909 06:59:00.475496 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:59:02.182065 containerd[1614]: time="2025-09-09T06:59:02.180370708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:02.184008 containerd[1614]: time="2025-09-09T06:59:02.183943956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 9 06:59:02.187114 containerd[1614]: time="2025-09-09T06:59:02.186951714Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:02.191891 containerd[1614]: time="2025-09-09T06:59:02.191813861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:02.208503 containerd[1614]: time="2025-09-09T06:59:02.207934695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.483056861s"
Sep 9 06:59:02.208503 containerd[1614]: time="2025-09-09T06:59:02.207998615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 9 06:59:02.212166 containerd[1614]: time="2025-09-09T06:59:02.211638419Z" level=info msg="CreateContainer within sandbox \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 9 06:59:02.291223 containerd[1614]: time="2025-09-09T06:59:02.291136145Z" level=info msg="Container ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:59:02.299679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410766272.mount: Deactivated successfully.
Sep 9 06:59:02.313786 containerd[1614]: time="2025-09-09T06:59:02.313722987Z" level=info msg="CreateContainer within sandbox \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\""
Sep 9 06:59:02.321177 containerd[1614]: time="2025-09-09T06:59:02.320230886Z" level=info msg="StartContainer for \"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\""
Sep 9 06:59:02.322673 containerd[1614]: time="2025-09-09T06:59:02.322572395Z" level=info msg="connecting to shim ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d" address="unix:///run/containerd/s/1030457dbb7502155a20f17a43746442eee8af4be28c9e56cfcbfaba11031936" protocol=ttrpc version=3
Sep 9 06:59:02.356398 systemd[1]: Started cri-containerd-ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d.scope - libcontainer container ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d.
Sep 9 06:59:02.450231 containerd[1614]: time="2025-09-09T06:59:02.449578434Z" level=info msg="StartContainer for \"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\" returns successfully"
Sep 9 06:59:02.476680 kubelet[2900]: E0909 06:59:02.476613 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030"
Sep 9 06:59:03.562947 systemd[1]: cri-containerd-ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d.scope: Deactivated successfully.
Sep 9 06:59:03.563943 systemd[1]: cri-containerd-ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d.scope: Consumed 774ms CPU time, 165.1M memory peak, 7.1M read from disk, 171.3M written to disk.
Sep 9 06:59:03.627346 containerd[1614]: time="2025-09-09T06:59:03.626144638Z" level=info msg="received exit event container_id:\"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\" id:\"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\" pid:3634 exited_at:{seconds:1757401143 nanos:599234610}"
Sep 9 06:59:03.629192 containerd[1614]: time="2025-09-09T06:59:03.629150575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\" id:\"ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d\" pid:3634 exited_at:{seconds:1757401143 nanos:599234610}"
Sep 9 06:59:03.640029 kubelet[2900]: I0909 06:59:03.639974 2900 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 06:59:03.713917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec0670e8f5d0fec21f448d9fc6b05d6789e69b2057f05f8e2686c06ea5a5ef7d-rootfs.mount: Deactivated successfully.
Sep 9 06:59:03.754208 systemd[1]: Created slice kubepods-burstable-pod27857c03_7c50_4f43_a3c0_84aca71e183e.slice - libcontainer container kubepods-burstable-pod27857c03_7c50_4f43_a3c0_84aca71e183e.slice.
Sep 9 06:59:03.776715 systemd[1]: Created slice kubepods-burstable-pod734ec5de_cedd_46dc_a628_24b6dafad37c.slice - libcontainer container kubepods-burstable-pod734ec5de_cedd_46dc_a628_24b6dafad37c.slice.
Sep 9 06:59:03.799117 systemd[1]: Created slice kubepods-besteffort-podbe4f2c0b_d13d_4c9e_895f_07176896952f.slice - libcontainer container kubepods-besteffort-podbe4f2c0b_d13d_4c9e_895f_07176896952f.slice.
Sep 9 06:59:03.809446 containerd[1614]: time="2025-09-09T06:59:03.807737799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 06:59:03.821253 systemd[1]: Created slice kubepods-besteffort-pod8997eb8f_a6dc_4e83_a56c_39f2ec7f3051.slice - libcontainer container kubepods-besteffort-pod8997eb8f_a6dc_4e83_a56c_39f2ec7f3051.slice.
Sep 9 06:59:03.840594 kubelet[2900]: I0909 06:59:03.839530 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmszl\" (UniqueName: \"kubernetes.io/projected/be4f2c0b-d13d-4c9e-895f-07176896952f-kube-api-access-bmszl\") pod \"calico-kube-controllers-6947957bb4-v59sm\" (UID: \"be4f2c0b-d13d-4c9e-895f-07176896952f\") " pod="calico-system/calico-kube-controllers-6947957bb4-v59sm"
Sep 9 06:59:03.840594 kubelet[2900]: I0909 06:59:03.839603 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-ca-bundle\") pod \"whisker-57cf88d544-4z5pn\" (UID: \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\") " pod="calico-system/whisker-57cf88d544-4z5pn"
Sep 9 06:59:03.840594 kubelet[2900]: I0909 06:59:03.839652 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734ec5de-cedd-46dc-a628-24b6dafad37c-config-volume\") pod \"coredns-668d6bf9bc-b7fvp\" (UID: \"734ec5de-cedd-46dc-a628-24b6dafad37c\") " pod="kube-system/coredns-668d6bf9bc-b7fvp"
Sep 9 06:59:03.840594 kubelet[2900]: I0909 06:59:03.839689 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5lb\" (UniqueName: \"kubernetes.io/projected/734ec5de-cedd-46dc-a628-24b6dafad37c-kube-api-access-kj5lb\") pod \"coredns-668d6bf9bc-b7fvp\" (UID: \"734ec5de-cedd-46dc-a628-24b6dafad37c\") " pod="kube-system/coredns-668d6bf9bc-b7fvp"
Sep 9 06:59:03.840594 kubelet[2900]: I0909 06:59:03.839727 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be4f2c0b-d13d-4c9e-895f-07176896952f-tigera-ca-bundle\") pod \"calico-kube-controllers-6947957bb4-v59sm\" (UID: \"be4f2c0b-d13d-4c9e-895f-07176896952f\") " pod="calico-system/calico-kube-controllers-6947957bb4-v59sm"
Sep 9 06:59:03.844431 kubelet[2900]: I0909 06:59:03.839766 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-backend-key-pair\") pod \"whisker-57cf88d544-4z5pn\" (UID: \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\") " pod="calico-system/whisker-57cf88d544-4z5pn"
Sep 9 06:59:03.844431 kubelet[2900]: I0909 06:59:03.839795 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27857c03-7c50-4f43-a3c0-84aca71e183e-config-volume\") pod \"coredns-668d6bf9bc-57sfk\" (UID: \"27857c03-7c50-4f43-a3c0-84aca71e183e\") " pod="kube-system/coredns-668d6bf9bc-57sfk"
Sep 9 06:59:03.844431 kubelet[2900]: I0909 06:59:03.839854 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgmt\" (UniqueName: \"kubernetes.io/projected/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-kube-api-access-5cgmt\") pod \"whisker-57cf88d544-4z5pn\" (UID: \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\") " pod="calico-system/whisker-57cf88d544-4z5pn"
Sep 9 06:59:03.844431 kubelet[2900]: I0909 06:59:03.839888 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7llp\" (UniqueName: \"kubernetes.io/projected/27857c03-7c50-4f43-a3c0-84aca71e183e-kube-api-access-r7llp\") pod \"coredns-668d6bf9bc-57sfk\" (UID: \"27857c03-7c50-4f43-a3c0-84aca71e183e\") " pod="kube-system/coredns-668d6bf9bc-57sfk"
Sep 9 06:59:03.845174 systemd[1]: Created slice kubepods-besteffort-podbcbf9709_7836_4b82_a534_49bb731fb071.slice - libcontainer container kubepods-besteffort-podbcbf9709_7836_4b82_a534_49bb731fb071.slice.
Sep 9 06:59:03.856793 systemd[1]: Created slice kubepods-besteffort-podd119e097_5ad4_4aa5_af1d_a78edabdf1b7.slice - libcontainer container kubepods-besteffort-podd119e097_5ad4_4aa5_af1d_a78edabdf1b7.slice.
Sep 9 06:59:03.868366 systemd[1]: Created slice kubepods-besteffort-podc0424c92_a0a6_4bbb_be3b_6682abebc6da.slice - libcontainer container kubepods-besteffort-podc0424c92_a0a6_4bbb_be3b_6682abebc6da.slice.
Sep 9 06:59:03.940587 kubelet[2900]: I0909 06:59:03.940515 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbf9709-7836-4b82-a534-49bb731fb071-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hp4tc\" (UID: \"bcbf9709-7836-4b82-a534-49bb731fb071\") " pod="calico-system/goldmane-54d579b49d-hp4tc"
Sep 9 06:59:03.949150 kubelet[2900]: I0909 06:59:03.946392 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8cx\" (UniqueName: \"kubernetes.io/projected/bcbf9709-7836-4b82-a534-49bb731fb071-kube-api-access-9j8cx\") pod \"goldmane-54d579b49d-hp4tc\" (UID: \"bcbf9709-7836-4b82-a534-49bb731fb071\") " pod="calico-system/goldmane-54d579b49d-hp4tc"
Sep 9 06:59:03.949150 kubelet[2900]: I0909 06:59:03.946549 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgd7n\" (UniqueName: \"kubernetes.io/projected/d119e097-5ad4-4aa5-af1d-a78edabdf1b7-kube-api-access-vgd7n\") pod \"calico-apiserver-8545c7b8c4-5vp24\" (UID: \"d119e097-5ad4-4aa5-af1d-a78edabdf1b7\") " pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24"
Sep 9 06:59:03.949150 kubelet[2900]: I0909 06:59:03.946649 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0424c92-a0a6-4bbb-be3b-6682abebc6da-calico-apiserver-certs\") pod \"calico-apiserver-8545c7b8c4-nrdkx\" (UID: \"c0424c92-a0a6-4bbb-be3b-6682abebc6da\") " pod="calico-apiserver/calico-apiserver-8545c7b8c4-nrdkx"
Sep 9 06:59:03.949150 kubelet[2900]: I0909 06:59:03.946725 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz958\" (UniqueName: \"kubernetes.io/projected/c0424c92-a0a6-4bbb-be3b-6682abebc6da-kube-api-access-rz958\") pod \"calico-apiserver-8545c7b8c4-nrdkx\" (UID: \"c0424c92-a0a6-4bbb-be3b-6682abebc6da\") " pod="calico-apiserver/calico-apiserver-8545c7b8c4-nrdkx"
Sep 9 06:59:03.949150 kubelet[2900]: I0909 06:59:03.946763 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bcbf9709-7836-4b82-a534-49bb731fb071-goldmane-key-pair\") pod \"goldmane-54d579b49d-hp4tc\" (UID: \"bcbf9709-7836-4b82-a534-49bb731fb071\") " pod="calico-system/goldmane-54d579b49d-hp4tc"
Sep 9 06:59:03.949996 kubelet[2900]: I0909 06:59:03.946902 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d119e097-5ad4-4aa5-af1d-a78edabdf1b7-calico-apiserver-certs\") pod \"calico-apiserver-8545c7b8c4-5vp24\" (UID: \"d119e097-5ad4-4aa5-af1d-a78edabdf1b7\") " pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24"
Sep 9 06:59:03.949996 kubelet[2900]: I0909 06:59:03.946964 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbf9709-7836-4b82-a534-49bb731fb071-config\") pod \"goldmane-54d579b49d-hp4tc\" (UID: \"bcbf9709-7836-4b82-a534-49bb731fb071\") " pod="calico-system/goldmane-54d579b49d-hp4tc"
Sep 9 06:59:04.079890 containerd[1614]: time="2025-09-09T06:59:04.079164473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57sfk,Uid:27857c03-7c50-4f43-a3c0-84aca71e183e,Namespace:kube-system,Attempt:0,}"
Sep 9 06:59:04.089596 containerd[1614]: time="2025-09-09T06:59:04.089539864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b7fvp,Uid:734ec5de-cedd-46dc-a628-24b6dafad37c,Namespace:kube-system,Attempt:0,}"
Sep 9 06:59:04.117078 containerd[1614]: time="2025-09-09T06:59:04.116628829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6947957bb4-v59sm,Uid:be4f2c0b-d13d-4c9e-895f-07176896952f,Namespace:calico-system,Attempt:0,}"
Sep 9 06:59:04.149533 containerd[1614]: time="2025-09-09T06:59:04.149465264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57cf88d544-4z5pn,Uid:8997eb8f-a6dc-4e83-a56c-39f2ec7f3051,Namespace:calico-system,Attempt:0,}"
Sep 9 06:59:04.159899 containerd[1614]: time="2025-09-09T06:59:04.159761563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hp4tc,Uid:bcbf9709-7836-4b82-a534-49bb731fb071,Namespace:calico-system,Attempt:0,}"
Sep 9 06:59:04.171628 containerd[1614]: time="2025-09-09T06:59:04.171370641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-5vp24,Uid:d119e097-5ad4-4aa5-af1d-a78edabdf1b7,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 06:59:04.177809 containerd[1614]: time="2025-09-09T06:59:04.177751012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-nrdkx,Uid:c0424c92-a0a6-4bbb-be3b-6682abebc6da,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 06:59:04.492548 systemd[1]: Created slice kubepods-besteffort-podf91ddf46_adf9_4330_b27d_18c1e9855030.slice - libcontainer container kubepods-besteffort-podf91ddf46_adf9_4330_b27d_18c1e9855030.slice.
Sep 9 06:59:04.504462 containerd[1614]: time="2025-09-09T06:59:04.504381317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qmtg,Uid:f91ddf46-adf9-4330-b27d-18c1e9855030,Namespace:calico-system,Attempt:0,}"
Sep 9 06:59:04.535800 containerd[1614]: time="2025-09-09T06:59:04.535730089Z" level=error msg="Failed to destroy network for sandbox \"4eed44e15c900747b6f7d1c93fd155eb37f25c3b81fcc5939e5cdec1786d7b67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.541717 containerd[1614]: time="2025-09-09T06:59:04.541639074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b7fvp,Uid:734ec5de-cedd-46dc-a628-24b6dafad37c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eed44e15c900747b6f7d1c93fd155eb37f25c3b81fcc5939e5cdec1786d7b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.553471 containerd[1614]: time="2025-09-09T06:59:04.553414013Z" level=error msg="Failed to destroy network for sandbox \"27be2c038626c62c0ff24ba6870fb6ceda6db451e72221a5e3cd748084b363f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.554675 kubelet[2900]: E0909 06:59:04.554618 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eed44e15c900747b6f7d1c93fd155eb37f25c3b81fcc5939e5cdec1786d7b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.557611 kubelet[2900]: E0909 06:59:04.556070 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eed44e15c900747b6f7d1c93fd155eb37f25c3b81fcc5939e5cdec1786d7b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b7fvp"
Sep 9 06:59:04.557611 kubelet[2900]: E0909 06:59:04.556158 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eed44e15c900747b6f7d1c93fd155eb37f25c3b81fcc5939e5cdec1786d7b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b7fvp"
Sep 9 06:59:04.557611 kubelet[2900]: E0909 06:59:04.556261 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b7fvp_kube-system(734ec5de-cedd-46dc-a628-24b6dafad37c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b7fvp_kube-system(734ec5de-cedd-46dc-a628-24b6dafad37c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4eed44e15c900747b6f7d1c93fd155eb37f25c3b81fcc5939e5cdec1786d7b67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b7fvp" podUID="734ec5de-cedd-46dc-a628-24b6dafad37c"
Sep 9 06:59:04.560227 containerd[1614]: time="2025-09-09T06:59:04.559926256Z" level=error msg="Failed to destroy network for sandbox \"21099c35208b043b242e2cb8a2e3d12843cfd4d31f2d2e5a45911dff8acb2623\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.563073 containerd[1614]: time="2025-09-09T06:59:04.562970648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-nrdkx,Uid:c0424c92-a0a6-4bbb-be3b-6682abebc6da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21099c35208b043b242e2cb8a2e3d12843cfd4d31f2d2e5a45911dff8acb2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.576117 containerd[1614]: time="2025-09-09T06:59:04.561583443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57sfk,Uid:27857c03-7c50-4f43-a3c0-84aca71e183e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27be2c038626c62c0ff24ba6870fb6ceda6db451e72221a5e3cd748084b363f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.576375 kubelet[2900]: E0909 06:59:04.575155 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27be2c038626c62c0ff24ba6870fb6ceda6db451e72221a5e3cd748084b363f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.576375 kubelet[2900]: E0909 06:59:04.575233 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27be2c038626c62c0ff24ba6870fb6ceda6db451e72221a5e3cd748084b363f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-57sfk"
Sep 9 06:59:04.576375 kubelet[2900]: E0909 06:59:04.575265 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27be2c038626c62c0ff24ba6870fb6ceda6db451e72221a5e3cd748084b363f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-57sfk"
Sep 9 06:59:04.576534 kubelet[2900]: E0909 06:59:04.575329 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-57sfk_kube-system(27857c03-7c50-4f43-a3c0-84aca71e183e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-57sfk_kube-system(27857c03-7c50-4f43-a3c0-84aca71e183e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27be2c038626c62c0ff24ba6870fb6ceda6db451e72221a5e3cd748084b363f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-57sfk" podUID="27857c03-7c50-4f43-a3c0-84aca71e183e"
Sep 9 06:59:04.576534 kubelet[2900]: E0909 06:59:04.575397 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21099c35208b043b242e2cb8a2e3d12843cfd4d31f2d2e5a45911dff8acb2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.576534 kubelet[2900]: E0909 06:59:04.575439 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21099c35208b043b242e2cb8a2e3d12843cfd4d31f2d2e5a45911dff8acb2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8545c7b8c4-nrdkx"
Sep 9 06:59:04.576718 kubelet[2900]: E0909 06:59:04.575465 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21099c35208b043b242e2cb8a2e3d12843cfd4d31f2d2e5a45911dff8acb2623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8545c7b8c4-nrdkx"
Sep 9 06:59:04.576718 kubelet[2900]: E0909 06:59:04.575501 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8545c7b8c4-nrdkx_calico-apiserver(c0424c92-a0a6-4bbb-be3b-6682abebc6da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8545c7b8c4-nrdkx_calico-apiserver(c0424c92-a0a6-4bbb-be3b-6682abebc6da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21099c35208b043b242e2cb8a2e3d12843cfd4d31f2d2e5a45911dff8acb2623\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8545c7b8c4-nrdkx" podUID="c0424c92-a0a6-4bbb-be3b-6682abebc6da"
Sep 9 06:59:04.597057 containerd[1614]: time="2025-09-09T06:59:04.596804515Z" level=error msg="Failed to destroy network for sandbox \"1f30c60673953d3f94eba1ece4179484b81d9df043abd28f12a903abab0c4ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.597057 containerd[1614]: time="2025-09-09T06:59:04.596917633Z" level=error msg="Failed to destroy network for sandbox \"2d3bda53472269f9b016232dc73eb52f27d7233a617d2e803658c742174fe516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.598889 containerd[1614]: time="2025-09-09T06:59:04.598788617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-5vp24,Uid:d119e097-5ad4-4aa5-af1d-a78edabdf1b7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f30c60673953d3f94eba1ece4179484b81d9df043abd28f12a903abab0c4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.599389 kubelet[2900]: E0909 06:59:04.599340 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f30c60673953d3f94eba1ece4179484b81d9df043abd28f12a903abab0c4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.599634 kubelet[2900]: E0909 06:59:04.599601 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f30c60673953d3f94eba1ece4179484b81d9df043abd28f12a903abab0c4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24"
Sep 9 06:59:04.599791 kubelet[2900]: E0909 06:59:04.599756 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f30c60673953d3f94eba1ece4179484b81d9df043abd28f12a903abab0c4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24"
Sep 9 06:59:04.600700 containerd[1614]: time="2025-09-09T06:59:04.600173995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6947957bb4-v59sm,Uid:be4f2c0b-d13d-4c9e-895f-07176896952f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bda53472269f9b016232dc73eb52f27d7233a617d2e803658c742174fe516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.601774 kubelet[2900]: E0909 06:59:04.600800 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8545c7b8c4-5vp24_calico-apiserver(d119e097-5ad4-4aa5-af1d-a78edabdf1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8545c7b8c4-5vp24_calico-apiserver(d119e097-5ad4-4aa5-af1d-a78edabdf1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f30c60673953d3f94eba1ece4179484b81d9df043abd28f12a903abab0c4ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24" podUID="d119e097-5ad4-4aa5-af1d-a78edabdf1b7"
Sep 9 06:59:04.601774 kubelet[2900]: E0909 06:59:04.600919 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bda53472269f9b016232dc73eb52f27d7233a617d2e803658c742174fe516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.602261 kubelet[2900]: E0909 06:59:04.600954 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bda53472269f9b016232dc73eb52f27d7233a617d2e803658c742174fe516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm"
Sep 9 06:59:04.602912 kubelet[2900]: E0909 06:59:04.602720 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d3bda53472269f9b016232dc73eb52f27d7233a617d2e803658c742174fe516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm"
Sep 9 06:59:04.603600 kubelet[2900]: E0909 06:59:04.602815 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6947957bb4-v59sm_calico-system(be4f2c0b-d13d-4c9e-895f-07176896952f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6947957bb4-v59sm_calico-system(be4f2c0b-d13d-4c9e-895f-07176896952f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d3bda53472269f9b016232dc73eb52f27d7233a617d2e803658c742174fe516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm" podUID="be4f2c0b-d13d-4c9e-895f-07176896952f"
Sep 9 06:59:04.603887 containerd[1614]: time="2025-09-09T06:59:04.603850539Z" level=error msg="Failed to destroy network for sandbox \"078bce3ce81f34aecbffd38eda74126e8ab74025738dc4b21845051b1e8abb91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.609545 containerd[1614]: time="2025-09-09T06:59:04.609441943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hp4tc,Uid:bcbf9709-7836-4b82-a534-49bb731fb071,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bce3ce81f34aecbffd38eda74126e8ab74025738dc4b21845051b1e8abb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.610144 kubelet[2900]: E0909 06:59:04.610080 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bce3ce81f34aecbffd38eda74126e8ab74025738dc4b21845051b1e8abb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.610216 kubelet[2900]: E0909 06:59:04.610162 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bce3ce81f34aecbffd38eda74126e8ab74025738dc4b21845051b1e8abb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hp4tc"
Sep 9 06:59:04.611004 kubelet[2900]: E0909 06:59:04.610198 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bce3ce81f34aecbffd38eda74126e8ab74025738dc4b21845051b1e8abb91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hp4tc"
Sep 9 06:59:04.611004 kubelet[2900]: E0909 06:59:04.610280 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hp4tc_calico-system(bcbf9709-7836-4b82-a534-49bb731fb071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hp4tc_calico-system(bcbf9709-7836-4b82-a534-49bb731fb071)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"078bce3ce81f34aecbffd38eda74126e8ab74025738dc4b21845051b1e8abb91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hp4tc" podUID="bcbf9709-7836-4b82-a534-49bb731fb071"
Sep 9 06:59:04.612438 containerd[1614]: time="2025-09-09T06:59:04.611666976Z" level=error msg="Failed to destroy network for sandbox \"23b3bc688573485d20db14d443ad8f055249b77ca0017b2eaea02c1939a24def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.613534 containerd[1614]: time="2025-09-09T06:59:04.613471201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57cf88d544-4z5pn,Uid:8997eb8f-a6dc-4e83-a56c-39f2ec7f3051,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b3bc688573485d20db14d443ad8f055249b77ca0017b2eaea02c1939a24def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.613887 kubelet[2900]: E0909 06:59:04.613853 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b3bc688573485d20db14d443ad8f055249b77ca0017b2eaea02c1939a24def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.614253 kubelet[2900]: E0909 06:59:04.614023 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b3bc688573485d20db14d443ad8f055249b77ca0017b2eaea02c1939a24def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57cf88d544-4z5pn"
Sep 9 06:59:04.614253 kubelet[2900]: E0909 06:59:04.614159 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b3bc688573485d20db14d443ad8f055249b77ca0017b2eaea02c1939a24def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57cf88d544-4z5pn"
Sep 9 06:59:04.614838 kubelet[2900]: E0909 06:59:04.614584 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57cf88d544-4z5pn_calico-system(8997eb8f-a6dc-4e83-a56c-39f2ec7f3051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57cf88d544-4z5pn_calico-system(8997eb8f-a6dc-4e83-a56c-39f2ec7f3051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23b3bc688573485d20db14d443ad8f055249b77ca0017b2eaea02c1939a24def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57cf88d544-4z5pn" podUID="8997eb8f-a6dc-4e83-a56c-39f2ec7f3051"
Sep 9 06:59:04.670143 containerd[1614]: time="2025-09-09T06:59:04.670067913Z" level=error msg="Failed to destroy network for sandbox \"36ede85664091d9056e52854755b541608d8c94b7dab60aabb6fa9a2cbe7a570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.675435 containerd[1614]: time="2025-09-09T06:59:04.675283616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qmtg,Uid:f91ddf46-adf9-4330-b27d-18c1e9855030,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ede85664091d9056e52854755b541608d8c94b7dab60aabb6fa9a2cbe7a570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 06:59:04.675873 kubelet[2900]: E0909
06:59:04.675764 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ede85664091d9056e52854755b541608d8c94b7dab60aabb6fa9a2cbe7a570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:04.676439 kubelet[2900]: E0909 06:59:04.675918 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ede85664091d9056e52854755b541608d8c94b7dab60aabb6fa9a2cbe7a570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5qmtg" Sep 9 06:59:04.676439 kubelet[2900]: E0909 06:59:04.675971 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ede85664091d9056e52854755b541608d8c94b7dab60aabb6fa9a2cbe7a570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5qmtg" Sep 9 06:59:04.676439 kubelet[2900]: E0909 06:59:04.676082 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5qmtg_calico-system(f91ddf46-adf9-4330-b27d-18c1e9855030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5qmtg_calico-system(f91ddf46-adf9-4330-b27d-18c1e9855030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36ede85664091d9056e52854755b541608d8c94b7dab60aabb6fa9a2cbe7a570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5qmtg" podUID="f91ddf46-adf9-4330-b27d-18c1e9855030" Sep 9 06:59:15.507891 containerd[1614]: time="2025-09-09T06:59:15.503275230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-5vp24,Uid:d119e097-5ad4-4aa5-af1d-a78edabdf1b7,Namespace:calico-apiserver,Attempt:0,}" Sep 9 06:59:15.524083 containerd[1614]: time="2025-09-09T06:59:15.484881048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b7fvp,Uid:734ec5de-cedd-46dc-a628-24b6dafad37c,Namespace:kube-system,Attempt:0,}" Sep 9 06:59:15.784724 containerd[1614]: time="2025-09-09T06:59:15.783388309Z" level=error msg="Failed to destroy network for sandbox \"5a90e0f7fd247269723765bd85336fbbaed5260dff47abaa9ecafab14a0e42d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:15.787188 containerd[1614]: time="2025-09-09T06:59:15.787123738Z" level=error msg="Failed to destroy network for sandbox \"29dc290a4765594a0ababe9ee8640beb9065ef948bfd5948f2def1864c87c3ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:15.788227 containerd[1614]: time="2025-09-09T06:59:15.787145109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-5vp24,Uid:d119e097-5ad4-4aa5-af1d-a78edabdf1b7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a90e0f7fd247269723765bd85336fbbaed5260dff47abaa9ecafab14a0e42d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 9 06:59:15.788341 kubelet[2900]: E0909 06:59:15.788236 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a90e0f7fd247269723765bd85336fbbaed5260dff47abaa9ecafab14a0e42d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:15.790251 kubelet[2900]: E0909 06:59:15.788354 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a90e0f7fd247269723765bd85336fbbaed5260dff47abaa9ecafab14a0e42d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24" Sep 9 06:59:15.790251 kubelet[2900]: E0909 06:59:15.788392 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a90e0f7fd247269723765bd85336fbbaed5260dff47abaa9ecafab14a0e42d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24" Sep 9 06:59:15.790251 kubelet[2900]: E0909 06:59:15.788473 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8545c7b8c4-5vp24_calico-apiserver(d119e097-5ad4-4aa5-af1d-a78edabdf1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8545c7b8c4-5vp24_calico-apiserver(d119e097-5ad4-4aa5-af1d-a78edabdf1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5a90e0f7fd247269723765bd85336fbbaed5260dff47abaa9ecafab14a0e42d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24" podUID="d119e097-5ad4-4aa5-af1d-a78edabdf1b7" Sep 9 06:59:15.791808 systemd[1]: run-netns-cni\x2d3e9b63e2\x2d1202\x2d3d93\x2d949a\x2db11d408b01b9.mount: Deactivated successfully. Sep 9 06:59:15.802617 systemd[1]: run-netns-cni\x2d05331e6d\x2d011f\x2dc860\x2dd199\x2d223eeb7146cc.mount: Deactivated successfully. Sep 9 06:59:15.806302 containerd[1614]: time="2025-09-09T06:59:15.806233286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b7fvp,Uid:734ec5de-cedd-46dc-a628-24b6dafad37c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dc290a4765594a0ababe9ee8640beb9065ef948bfd5948f2def1864c87c3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:15.806928 kubelet[2900]: E0909 06:59:15.806859 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dc290a4765594a0ababe9ee8640beb9065ef948bfd5948f2def1864c87c3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:15.807293 kubelet[2900]: E0909 06:59:15.807175 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dc290a4765594a0ababe9ee8640beb9065ef948bfd5948f2def1864c87c3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b7fvp" Sep 9 06:59:15.808809 kubelet[2900]: E0909 06:59:15.807390 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dc290a4765594a0ababe9ee8640beb9065ef948bfd5948f2def1864c87c3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b7fvp" Sep 9 06:59:15.808809 kubelet[2900]: E0909 06:59:15.807512 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b7fvp_kube-system(734ec5de-cedd-46dc-a628-24b6dafad37c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b7fvp_kube-system(734ec5de-cedd-46dc-a628-24b6dafad37c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29dc290a4765594a0ababe9ee8640beb9065ef948bfd5948f2def1864c87c3ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b7fvp" podUID="734ec5de-cedd-46dc-a628-24b6dafad37c" Sep 9 06:59:15.905487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814281472.mount: Deactivated successfully. 
Sep 9 06:59:15.976007 containerd[1614]: time="2025-09-09T06:59:15.975901749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:16.004919 containerd[1614]: time="2025-09-09T06:59:15.984933383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 06:59:16.004919 containerd[1614]: time="2025-09-09T06:59:15.999130084Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:16.021251 containerd[1614]: time="2025-09-09T06:59:16.021146599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:16.029604 containerd[1614]: time="2025-09-09T06:59:16.029472343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 12.217015163s" Sep 9 06:59:16.032064 containerd[1614]: time="2025-09-09T06:59:16.029693686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 06:59:16.065803 containerd[1614]: time="2025-09-09T06:59:16.064845045Z" level=info msg="CreateContainer within sandbox \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 06:59:16.142651 containerd[1614]: time="2025-09-09T06:59:16.142508938Z" level=info msg="Container 
07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7: CDI devices from CRI Config.CDIDevices: []" Sep 9 06:59:16.222063 containerd[1614]: time="2025-09-09T06:59:16.221975193Z" level=info msg="CreateContainer within sandbox \"4fd8c5faad1f8b113f4b242778e50c1eee0824e0ca7c33dbf06519b91ea4b2aa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\"" Sep 9 06:59:16.224206 containerd[1614]: time="2025-09-09T06:59:16.224013870Z" level=info msg="StartContainer for \"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\"" Sep 9 06:59:16.242436 containerd[1614]: time="2025-09-09T06:59:16.242300980Z" level=info msg="connecting to shim 07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7" address="unix:///run/containerd/s/1030457dbb7502155a20f17a43746442eee8af4be28c9e56cfcbfaba11031936" protocol=ttrpc version=3 Sep 9 06:59:16.358339 systemd[1]: Started cri-containerd-07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7.scope - libcontainer container 07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7. 
Sep 9 06:59:16.478803 containerd[1614]: time="2025-09-09T06:59:16.478449704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hp4tc,Uid:bcbf9709-7836-4b82-a534-49bb731fb071,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:16.479785 containerd[1614]: time="2025-09-09T06:59:16.479715910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57cf88d544-4z5pn,Uid:8997eb8f-a6dc-4e83-a56c-39f2ec7f3051,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:16.481713 containerd[1614]: time="2025-09-09T06:59:16.481033420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6947957bb4-v59sm,Uid:be4f2c0b-d13d-4c9e-895f-07176896952f,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:16.495354 containerd[1614]: time="2025-09-09T06:59:16.495143170Z" level=info msg="StartContainer for \"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" returns successfully" Sep 9 06:59:16.700705 containerd[1614]: time="2025-09-09T06:59:16.700382308Z" level=error msg="Failed to destroy network for sandbox \"8fc2254c7ebdcc83ec16450141afd9c7fa62a282885b2d4e41202b0daad4bec6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.707982 containerd[1614]: time="2025-09-09T06:59:16.706200313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hp4tc,Uid:bcbf9709-7836-4b82-a534-49bb731fb071,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc2254c7ebdcc83ec16450141afd9c7fa62a282885b2d4e41202b0daad4bec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.708313 kubelet[2900]: E0909 06:59:16.706618 2900 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc2254c7ebdcc83ec16450141afd9c7fa62a282885b2d4e41202b0daad4bec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.708313 kubelet[2900]: E0909 06:59:16.706712 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc2254c7ebdcc83ec16450141afd9c7fa62a282885b2d4e41202b0daad4bec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hp4tc" Sep 9 06:59:16.708313 kubelet[2900]: E0909 06:59:16.706746 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fc2254c7ebdcc83ec16450141afd9c7fa62a282885b2d4e41202b0daad4bec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hp4tc" Sep 9 06:59:16.705397 systemd[1]: run-netns-cni\x2d8c8806af\x2d0c6b\x2d37f1\x2d1c46\x2d146d5d4ffa97.mount: Deactivated successfully. 
Sep 9 06:59:16.710367 kubelet[2900]: E0909 06:59:16.706818 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hp4tc_calico-system(bcbf9709-7836-4b82-a534-49bb731fb071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hp4tc_calico-system(bcbf9709-7836-4b82-a534-49bb731fb071)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fc2254c7ebdcc83ec16450141afd9c7fa62a282885b2d4e41202b0daad4bec6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hp4tc" podUID="bcbf9709-7836-4b82-a534-49bb731fb071" Sep 9 06:59:16.725182 containerd[1614]: time="2025-09-09T06:59:16.722869542Z" level=error msg="Failed to destroy network for sandbox \"ac303f1c1214ddd2bfc87b36770bf1ce80d054d49034d393020d6c1eec9fc376\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.726402 containerd[1614]: time="2025-09-09T06:59:16.726350364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57cf88d544-4z5pn,Uid:8997eb8f-a6dc-4e83-a56c-39f2ec7f3051,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac303f1c1214ddd2bfc87b36770bf1ce80d054d49034d393020d6c1eec9fc376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.726976 systemd[1]: run-netns-cni\x2d6baf5fad\x2deb9c\x2d4d61\x2dd10f\x2dbf43ad5b814b.mount: Deactivated successfully. 
Sep 9 06:59:16.728254 kubelet[2900]: E0909 06:59:16.727434 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac303f1c1214ddd2bfc87b36770bf1ce80d054d49034d393020d6c1eec9fc376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.728254 kubelet[2900]: E0909 06:59:16.727510 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac303f1c1214ddd2bfc87b36770bf1ce80d054d49034d393020d6c1eec9fc376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57cf88d544-4z5pn" Sep 9 06:59:16.728254 kubelet[2900]: E0909 06:59:16.727541 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac303f1c1214ddd2bfc87b36770bf1ce80d054d49034d393020d6c1eec9fc376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57cf88d544-4z5pn" Sep 9 06:59:16.728415 kubelet[2900]: E0909 06:59:16.727619 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57cf88d544-4z5pn_calico-system(8997eb8f-a6dc-4e83-a56c-39f2ec7f3051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57cf88d544-4z5pn_calico-system(8997eb8f-a6dc-4e83-a56c-39f2ec7f3051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac303f1c1214ddd2bfc87b36770bf1ce80d054d49034d393020d6c1eec9fc376\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57cf88d544-4z5pn" podUID="8997eb8f-a6dc-4e83-a56c-39f2ec7f3051" Sep 9 06:59:16.760128 containerd[1614]: time="2025-09-09T06:59:16.759327740Z" level=error msg="Failed to destroy network for sandbox \"e941f397bd0b7e7fb245f79823501030f1c6823f1e2587c0f82e38a26c3cb2d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.763309 containerd[1614]: time="2025-09-09T06:59:16.763145989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6947957bb4-v59sm,Uid:be4f2c0b-d13d-4c9e-895f-07176896952f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e941f397bd0b7e7fb245f79823501030f1c6823f1e2587c0f82e38a26c3cb2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.764321 systemd[1]: run-netns-cni\x2dba04cec8\x2db692\x2d72a7\x2d9de4\x2d32f98de63d99.mount: Deactivated successfully. 
Sep 9 06:59:16.765444 kubelet[2900]: E0909 06:59:16.765348 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e941f397bd0b7e7fb245f79823501030f1c6823f1e2587c0f82e38a26c3cb2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 06:59:16.765824 kubelet[2900]: E0909 06:59:16.765472 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e941f397bd0b7e7fb245f79823501030f1c6823f1e2587c0f82e38a26c3cb2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm" Sep 9 06:59:16.765824 kubelet[2900]: E0909 06:59:16.765706 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e941f397bd0b7e7fb245f79823501030f1c6823f1e2587c0f82e38a26c3cb2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm" Sep 9 06:59:16.768486 kubelet[2900]: E0909 06:59:16.766265 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6947957bb4-v59sm_calico-system(be4f2c0b-d13d-4c9e-895f-07176896952f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6947957bb4-v59sm_calico-system(be4f2c0b-d13d-4c9e-895f-07176896952f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e941f397bd0b7e7fb245f79823501030f1c6823f1e2587c0f82e38a26c3cb2d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm" podUID="be4f2c0b-d13d-4c9e-895f-07176896952f" Sep 9 06:59:16.889272 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 06:59:16.890338 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 06:59:16.927615 kubelet[2900]: I0909 06:59:16.926322 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gkltq" podStartSLOduration=1.922494794 podStartE2EDuration="27.92628127s" podCreationTimestamp="2025-09-09 06:58:49 +0000 UTC" firstStartedPulling="2025-09-09 06:58:50.027395296 +0000 UTC m=+23.824571857" lastFinishedPulling="2025-09-09 06:59:16.031181771 +0000 UTC m=+49.828358333" observedRunningTime="2025-09-09 06:59:16.924593097 +0000 UTC m=+50.721769670" watchObservedRunningTime="2025-09-09 06:59:16.92628127 +0000 UTC m=+50.723457853" Sep 9 06:59:17.357116 kubelet[2900]: I0909 06:59:17.356392 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-ca-bundle\") pod \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\" (UID: \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\") " Sep 9 06:59:17.358846 kubelet[2900]: I0909 06:59:17.357907 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cgmt\" (UniqueName: \"kubernetes.io/projected/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-kube-api-access-5cgmt\") pod \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\" (UID: \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\") " Sep 9 06:59:17.359809 kubelet[2900]: I0909 06:59:17.359780 2900 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-backend-key-pair\") pod \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\" (UID: \"8997eb8f-a6dc-4e83-a56c-39f2ec7f3051\") " Sep 9 06:59:17.372662 systemd[1]: var-lib-kubelet-pods-8997eb8f\x2da6dc\x2d4e83\x2da56c\x2d39f2ec7f3051-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5cgmt.mount: Deactivated successfully. Sep 9 06:59:17.380915 kubelet[2900]: I0909 06:59:17.357846 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8997eb8f-a6dc-4e83-a56c-39f2ec7f3051" (UID: "8997eb8f-a6dc-4e83-a56c-39f2ec7f3051"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 06:59:17.381489 kubelet[2900]: I0909 06:59:17.381437 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-kube-api-access-5cgmt" (OuterVolumeSpecName: "kube-api-access-5cgmt") pod "8997eb8f-a6dc-4e83-a56c-39f2ec7f3051" (UID: "8997eb8f-a6dc-4e83-a56c-39f2ec7f3051"). InnerVolumeSpecName "kube-api-access-5cgmt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 06:59:17.387297 kubelet[2900]: I0909 06:59:17.387250 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8997eb8f-a6dc-4e83-a56c-39f2ec7f3051" (UID: "8997eb8f-a6dc-4e83-a56c-39f2ec7f3051"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 06:59:17.453229 containerd[1614]: time="2025-09-09T06:59:17.453086413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"65b0cb581f31d95d06209bb2a5aa8ca8f92d8b6a85e1db37e10c6d867bf72fa8\" pid:4084 exit_status:1 exited_at:{seconds:1757401157 nanos:448342455}" Sep 9 06:59:17.473629 kubelet[2900]: I0909 06:59:17.473565 2900 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-ca-bundle\") on node \"srv-f5a1c.gb1.brightbox.com\" DevicePath \"\"" Sep 9 06:59:17.473629 kubelet[2900]: I0909 06:59:17.473620 2900 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-whisker-backend-key-pair\") on node \"srv-f5a1c.gb1.brightbox.com\" DevicePath \"\"" Sep 9 06:59:17.473629 kubelet[2900]: I0909 06:59:17.473637 2900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cgmt\" (UniqueName: \"kubernetes.io/projected/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051-kube-api-access-5cgmt\") on node \"srv-f5a1c.gb1.brightbox.com\" DevicePath \"\"" Sep 9 06:59:17.517241 systemd[1]: var-lib-kubelet-pods-8997eb8f\x2da6dc\x2d4e83\x2da56c\x2d39f2ec7f3051-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 06:59:17.897731 systemd[1]: Removed slice kubepods-besteffort-pod8997eb8f_a6dc_4e83_a56c_39f2ec7f3051.slice - libcontainer container kubepods-besteffort-pod8997eb8f_a6dc_4e83_a56c_39f2ec7f3051.slice. Sep 9 06:59:18.033426 systemd[1]: Created slice kubepods-besteffort-pod7943998f_848a_4376_8475_02bcdcecc0b1.slice - libcontainer container kubepods-besteffort-pod7943998f_848a_4376_8475_02bcdcecc0b1.slice. 
Sep 9 06:59:18.086320 kubelet[2900]: I0909 06:59:18.086201 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wnt\" (UniqueName: \"kubernetes.io/projected/7943998f-848a-4376-8475-02bcdcecc0b1-kube-api-access-f8wnt\") pod \"whisker-7dc7c5f8ff-zcb9c\" (UID: \"7943998f-848a-4376-8475-02bcdcecc0b1\") " pod="calico-system/whisker-7dc7c5f8ff-zcb9c" Sep 9 06:59:18.087258 kubelet[2900]: I0909 06:59:18.087025 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7943998f-848a-4376-8475-02bcdcecc0b1-whisker-backend-key-pair\") pod \"whisker-7dc7c5f8ff-zcb9c\" (UID: \"7943998f-848a-4376-8475-02bcdcecc0b1\") " pod="calico-system/whisker-7dc7c5f8ff-zcb9c" Sep 9 06:59:18.087258 kubelet[2900]: I0909 06:59:18.087172 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7943998f-848a-4376-8475-02bcdcecc0b1-whisker-ca-bundle\") pod \"whisker-7dc7c5f8ff-zcb9c\" (UID: \"7943998f-848a-4376-8475-02bcdcecc0b1\") " pod="calico-system/whisker-7dc7c5f8ff-zcb9c" Sep 9 06:59:18.153627 containerd[1614]: time="2025-09-09T06:59:18.152792071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"47e43f8f70d41fc75008dc3dd6f913212503985fa57795ebf2a4f53a1c8bab10\" pid:4129 exit_status:1 exited_at:{seconds:1757401158 nanos:152341898}" Sep 9 06:59:18.338738 containerd[1614]: time="2025-09-09T06:59:18.338662950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc7c5f8ff-zcb9c,Uid:7943998f-848a-4376-8475-02bcdcecc0b1,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:18.477281 containerd[1614]: time="2025-09-09T06:59:18.477112452Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-nrdkx,Uid:c0424c92-a0a6-4bbb-be3b-6682abebc6da,Namespace:calico-apiserver,Attempt:0,}" Sep 9 06:59:18.477739 containerd[1614]: time="2025-09-09T06:59:18.477132924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qmtg,Uid:f91ddf46-adf9-4330-b27d-18c1e9855030,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:18.482633 kubelet[2900]: I0909 06:59:18.482590 2900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8997eb8f-a6dc-4e83-a56c-39f2ec7f3051" path="/var/lib/kubelet/pods/8997eb8f-a6dc-4e83-a56c-39f2ec7f3051/volumes" Sep 9 06:59:18.779734 kubelet[2900]: I0909 06:59:18.778624 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 06:59:18.980147 systemd-networkd[1520]: cali922d8fc83ef: Link UP Sep 9 06:59:18.991465 systemd-networkd[1520]: cali922d8fc83ef: Gained carrier Sep 9 06:59:19.059649 containerd[1614]: 2025-09-09 06:59:18.550 [INFO][4161] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 06:59:19.059649 containerd[1614]: 2025-09-09 06:59:18.571 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0 calico-apiserver-8545c7b8c4- calico-apiserver c0424c92-a0a6-4bbb-be3b-6682abebc6da 814 0 2025-09-09 06:58:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8545c7b8c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com calico-apiserver-8545c7b8c4-nrdkx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali922d8fc83ef [] [] }} ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" 
Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-" Sep 9 06:59:19.059649 containerd[1614]: 2025-09-09 06:59:18.571 [INFO][4161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.059649 containerd[1614]: 2025-09-09 06:59:18.807 [INFO][4186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" HandleID="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.809 [INFO][4186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" HandleID="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037a540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"calico-apiserver-8545c7b8c4-nrdkx", "timestamp":"2025-09-09 06:59:18.807541691 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.809 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.809 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.812 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com' Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.839 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.864 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.877 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.887 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.062580 containerd[1614]: 2025-09-09 06:59:18.892 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.892 [INFO][4186] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.897 [INFO][4186] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.911 [INFO][4186] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 
handle="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.920 [INFO][4186] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.104.1/26] block=192.168.104.0/26 handle="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.920 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.1/26] handle="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.921 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 06:59:19.064393 containerd[1614]: 2025-09-09 06:59:18.921 [INFO][4186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.1/26] IPv6=[] ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" HandleID="k8s-pod-network.91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.064746 containerd[1614]: 2025-09-09 06:59:18.934 [INFO][4161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0", GenerateName:"calico-apiserver-8545c7b8c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0424c92-a0a6-4bbb-be3b-6682abebc6da", ResourceVersion:"814", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8545c7b8c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8545c7b8c4-nrdkx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali922d8fc83ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:19.064879 containerd[1614]: 2025-09-09 06:59:18.936 [INFO][4161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.1/32] ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.064879 containerd[1614]: 2025-09-09 06:59:18.936 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali922d8fc83ef ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.064879 containerd[1614]: 2025-09-09 06:59:19.009 
[INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.066693 containerd[1614]: 2025-09-09 06:59:19.011 [INFO][4161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0", GenerateName:"calico-apiserver-8545c7b8c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0424c92-a0a6-4bbb-be3b-6682abebc6da", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8545c7b8c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d", Pod:"calico-apiserver-8545c7b8c4-nrdkx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.104.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali922d8fc83ef", MAC:"56:da:21:f4:94:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:19.067134 containerd[1614]: 2025-09-09 06:59:19.055 [INFO][4161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-nrdkx" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--nrdkx-eth0" Sep 9 06:59:19.116700 systemd-networkd[1520]: cali193b2ea4615: Link UP Sep 9 06:59:19.119190 systemd-networkd[1520]: cali193b2ea4615: Gained carrier Sep 9 06:59:19.210830 containerd[1614]: 2025-09-09 06:59:18.539 [INFO][4164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 06:59:19.210830 containerd[1614]: 2025-09-09 06:59:18.589 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0 csi-node-driver- calico-system f91ddf46-adf9-4330-b27d-18c1e9855030 697 0 2025-09-09 06:58:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com csi-node-driver-5qmtg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali193b2ea4615 [] [] }} ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" 
WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-" Sep 9 06:59:19.210830 containerd[1614]: 2025-09-09 06:59:18.589 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.210830 containerd[1614]: 2025-09-09 06:59:18.809 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" HandleID="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Workload="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:18.810 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" HandleID="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Workload="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003584e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"csi-node-driver-5qmtg", "timestamp":"2025-09-09 06:59:18.807582116 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:18.810 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:18.921 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:18.923 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com' Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:18.960 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:19.024 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:19.040 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:19.044 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.211725 containerd[1614]: 2025-09-09 06:59:19.049 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.049 [INFO][4191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.057 [INFO][4191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878 Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.076 [INFO][4191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.092 [INFO][4191] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.104.2/26] block=192.168.104.0/26 handle="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.092 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.2/26] handle="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.092 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 06:59:19.212653 containerd[1614]: 2025-09-09 06:59:19.092 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.2/26] IPv6=[] ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" HandleID="k8s-pod-network.4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Workload="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.212997 containerd[1614]: 2025-09-09 06:59:19.109 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f91ddf46-adf9-4330-b27d-18c1e9855030", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-5qmtg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali193b2ea4615", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:19.213354 containerd[1614]: 2025-09-09 06:59:19.109 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.2/32] ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.213354 containerd[1614]: 2025-09-09 06:59:19.109 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali193b2ea4615 ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.213354 containerd[1614]: 2025-09-09 06:59:19.127 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.213656 
containerd[1614]: 2025-09-09 06:59:19.137 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f91ddf46-adf9-4330-b27d-18c1e9855030", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878", Pod:"csi-node-driver-5qmtg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali193b2ea4615", MAC:"82:b9:dc:14:d0:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:19.213810 containerd[1614]: 
2025-09-09 06:59:19.178 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" Namespace="calico-system" Pod="csi-node-driver-5qmtg" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-csi--node--driver--5qmtg-eth0" Sep 9 06:59:19.312090 systemd-networkd[1520]: cali56552b7d817: Link UP Sep 9 06:59:19.314310 systemd-networkd[1520]: cali56552b7d817: Gained carrier Sep 9 06:59:19.387077 containerd[1614]: 2025-09-09 06:59:18.384 [INFO][4144] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 06:59:19.387077 containerd[1614]: 2025-09-09 06:59:18.420 [INFO][4144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0 whisker-7dc7c5f8ff- calico-system 7943998f-848a-4376-8475-02bcdcecc0b1 899 0 2025-09-09 06:59:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dc7c5f8ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com whisker-7dc7c5f8ff-zcb9c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali56552b7d817 [] [] }} ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-" Sep 9 06:59:19.387077 containerd[1614]: 2025-09-09 06:59:18.420 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.387077 containerd[1614]: 2025-09-09 06:59:18.808 [INFO][4156] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" HandleID="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Workload="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:18.808 [INFO][4156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" HandleID="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Workload="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038dc80), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"whisker-7dc7c5f8ff-zcb9c", "timestamp":"2025-09-09 06:59:18.808421886 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:18.810 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.092 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.096 [INFO][4156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com' Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.119 [INFO][4156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.157 [INFO][4156] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.189 [INFO][4156] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.203 [INFO][4156] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387459 containerd[1614]: 2025-09-09 06:59:19.222 [INFO][4156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.227 [INFO][4156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.237 [INFO][4156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.249 [INFO][4156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.276 [INFO][4156] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.104.3/26] block=192.168.104.0/26 handle="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.276 [INFO][4156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.3/26] handle="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.276 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 06:59:19.387890 containerd[1614]: 2025-09-09 06:59:19.276 [INFO][4156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.3/26] IPv6=[] ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" HandleID="k8s-pod-network.87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Workload="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.389413 containerd[1614]: 2025-09-09 06:59:19.292 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0", GenerateName:"whisker-7dc7c5f8ff-", Namespace:"calico-system", SelfLink:"", UID:"7943998f-848a-4376-8475-02bcdcecc0b1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dc7c5f8ff", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7dc7c5f8ff-zcb9c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali56552b7d817", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:19.389413 containerd[1614]: 2025-09-09 06:59:19.294 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.3/32] ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.389565 containerd[1614]: 2025-09-09 06:59:19.294 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56552b7d817 ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.389565 containerd[1614]: 2025-09-09 06:59:19.320 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.389666 containerd[1614]: 2025-09-09 06:59:19.323 [INFO][4144] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0", GenerateName:"whisker-7dc7c5f8ff-", Namespace:"calico-system", SelfLink:"", UID:"7943998f-848a-4376-8475-02bcdcecc0b1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dc7c5f8ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad", Pod:"whisker-7dc7c5f8ff-zcb9c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali56552b7d817", MAC:"6e:a5:61:44:7b:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:19.389761 containerd[1614]: 2025-09-09 06:59:19.367 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" Namespace="calico-system" Pod="whisker-7dc7c5f8ff-zcb9c" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-whisker--7dc7c5f8ff--zcb9c-eth0" Sep 9 06:59:19.481477 containerd[1614]: time="2025-09-09T06:59:19.480886666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57sfk,Uid:27857c03-7c50-4f43-a3c0-84aca71e183e,Namespace:kube-system,Attempt:0,}" Sep 9 06:59:19.506782 containerd[1614]: time="2025-09-09T06:59:19.505540953Z" level=info msg="connecting to shim 91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d" address="unix:///run/containerd/s/d1357c6c1880ee95bcf98715065a6082458eb47fa7800f31b2d1588a388ff256" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:59:19.518391 containerd[1614]: time="2025-09-09T06:59:19.518325852Z" level=info msg="connecting to shim 4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878" address="unix:///run/containerd/s/cca32cbee3e611cd913e143e9043d0d590f6b46e2486cfdea51c67ce3eed60dd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:59:19.602828 containerd[1614]: time="2025-09-09T06:59:19.602678373Z" level=info msg="connecting to shim 87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad" address="unix:///run/containerd/s/4ba28a15246529715de08cb8ad1af5f3df080ea3e37abbe4ff3e1a5f3095f514" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:59:19.683309 systemd[1]: Started cri-containerd-4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878.scope - libcontainer container 4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878. Sep 9 06:59:19.689279 systemd[1]: Started cri-containerd-91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d.scope - libcontainer container 91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d. 
Sep 9 06:59:19.824378 systemd[1]: Started cri-containerd-87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad.scope - libcontainer container 87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad. Sep 9 06:59:19.952941 containerd[1614]: time="2025-09-09T06:59:19.952878497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qmtg,Uid:f91ddf46-adf9-4330-b27d-18c1e9855030,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878\"" Sep 9 06:59:19.988908 containerd[1614]: time="2025-09-09T06:59:19.988613893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 06:59:20.032074 containerd[1614]: time="2025-09-09T06:59:20.031504000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc7c5f8ff-zcb9c,Uid:7943998f-848a-4376-8475-02bcdcecc0b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad\"" Sep 9 06:59:20.205333 systemd-networkd[1520]: cali45fa777a494: Link UP Sep 9 06:59:20.209638 systemd-networkd[1520]: cali45fa777a494: Gained carrier Sep 9 06:59:20.266289 containerd[1614]: time="2025-09-09T06:59:20.266208319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-nrdkx,Uid:c0424c92-a0a6-4bbb-be3b-6682abebc6da,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d\"" Sep 9 06:59:20.282129 containerd[1614]: 2025-09-09 06:59:19.844 [INFO][4365] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 06:59:20.282129 containerd[1614]: 2025-09-09 06:59:19.907 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0 coredns-668d6bf9bc- kube-system 27857c03-7c50-4f43-a3c0-84aca71e183e 813 0 2025-09-09 06:58:33 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com coredns-668d6bf9bc-57sfk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali45fa777a494 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-" Sep 9 06:59:20.282129 containerd[1614]: 2025-09-09 06:59:19.908 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.282129 containerd[1614]: 2025-09-09 06:59:20.054 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" HandleID="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Workload="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.060 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" HandleID="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Workload="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036e1e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-57sfk", "timestamp":"2025-09-09 06:59:20.049972208 +0000 UTC"}, 
Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.060 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.060 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.060 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com' Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.076 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.091 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.102 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.115 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282516 containerd[1614]: 2025-09-09 06:59:20.124 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 06:59:20.124 [INFO][4471] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 
06:59:20.137 [INFO][4471] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 06:59:20.150 [INFO][4471] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 06:59:20.166 [INFO][4471] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.104.4/26] block=192.168.104.0/26 handle="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 06:59:20.166 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.4/26] handle="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 06:59:20.166 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 06:59:20.282895 containerd[1614]: 2025-09-09 06:59:20.166 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.4/26] IPv6=[] ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" HandleID="k8s-pod-network.cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Workload="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.285201 containerd[1614]: 2025-09-09 06:59:20.181 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"27857c03-7c50-4f43-a3c0-84aca71e183e", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-57sfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali45fa777a494", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:20.285201 containerd[1614]: 2025-09-09 06:59:20.183 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.4/32] ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.285201 containerd[1614]: 2025-09-09 06:59:20.183 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45fa777a494 ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.285201 containerd[1614]: 2025-09-09 06:59:20.212 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.285201 containerd[1614]: 2025-09-09 06:59:20.213 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" 
WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"27857c03-7c50-4f43-a3c0-84aca71e183e", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd", Pod:"coredns-668d6bf9bc-57sfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali45fa777a494", MAC:"ea:e5:a0:5b:e8:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:20.285201 
containerd[1614]: 2025-09-09 06:59:20.264 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-57sfk" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--57sfk-eth0" Sep 9 06:59:20.322723 systemd-networkd[1520]: cali193b2ea4615: Gained IPv6LL Sep 9 06:59:20.345961 containerd[1614]: time="2025-09-09T06:59:20.345885345Z" level=info msg="connecting to shim cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd" address="unix:///run/containerd/s/903d5262dedad417eebb7260a4d7eedd672617086d564b0231a7a544c6ce7ae5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:59:20.397606 systemd[1]: Started cri-containerd-cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd.scope - libcontainer container cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd. Sep 9 06:59:20.513097 containerd[1614]: time="2025-09-09T06:59:20.512942791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57sfk,Uid:27857c03-7c50-4f43-a3c0-84aca71e183e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd\"" Sep 9 06:59:20.549841 containerd[1614]: time="2025-09-09T06:59:20.549785309Z" level=info msg="CreateContainer within sandbox \"cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 06:59:20.572167 systemd-networkd[1520]: cali922d8fc83ef: Gained IPv6LL Sep 9 06:59:20.576095 containerd[1614]: time="2025-09-09T06:59:20.575997341Z" level=info msg="Container 3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718: CDI devices from CRI Config.CDIDevices: []" Sep 9 06:59:20.582140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432588750.mount: Deactivated successfully. 
Sep 9 06:59:20.588656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886366729.mount: Deactivated successfully. Sep 9 06:59:20.591923 containerd[1614]: time="2025-09-09T06:59:20.590633107Z" level=info msg="CreateContainer within sandbox \"cea530c411c0cc6c4f6489df192f08f00cad94cf032854851e4311d31d51d9bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718\"" Sep 9 06:59:20.594885 containerd[1614]: time="2025-09-09T06:59:20.594838062Z" level=info msg="StartContainer for \"3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718\"" Sep 9 06:59:20.598306 containerd[1614]: time="2025-09-09T06:59:20.598225773Z" level=info msg="connecting to shim 3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718" address="unix:///run/containerd/s/903d5262dedad417eebb7260a4d7eedd672617086d564b0231a7a544c6ce7ae5" protocol=ttrpc version=3 Sep 9 06:59:20.659295 systemd[1]: Started cri-containerd-3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718.scope - libcontainer container 3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718. 
Sep 9 06:59:20.736217 containerd[1614]: time="2025-09-09T06:59:20.736168129Z" level=info msg="StartContainer for \"3b8fd329a0f172f5807f2f69d1cb958958111bbe69de4ec2873a846a99aff718\" returns successfully" Sep 9 06:59:20.749683 containerd[1614]: time="2025-09-09T06:59:20.749630673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"c229415f221cf934568f423756a836164dace35fe97ee6750486e3f526ea3090\" pid:4239 exit_status:1 exited_at:{seconds:1757401160 nanos:747629039}" Sep 9 06:59:20.978847 kubelet[2900]: I0909 06:59:20.973273 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-57sfk" podStartSLOduration=47.973190879 podStartE2EDuration="47.973190879s" podCreationTimestamp="2025-09-09 06:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 06:59:20.972167498 +0000 UTC m=+54.769344082" watchObservedRunningTime="2025-09-09 06:59:20.973190879 +0000 UTC m=+54.770367452" Sep 9 06:59:21.211229 systemd-networkd[1520]: cali56552b7d817: Gained IPv6LL Sep 9 06:59:21.238748 systemd-networkd[1520]: vxlan.calico: Link UP Sep 9 06:59:21.238761 systemd-networkd[1520]: vxlan.calico: Gained carrier Sep 9 06:59:21.403254 systemd-networkd[1520]: cali45fa777a494: Gained IPv6LL Sep 9 06:59:22.392986 containerd[1614]: time="2025-09-09T06:59:22.391972089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:22.392986 containerd[1614]: time="2025-09-09T06:59:22.392929955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 06:59:22.393785 containerd[1614]: time="2025-09-09T06:59:22.393750273Z" level=info msg="ImageCreate event 
name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:22.395913 containerd[1614]: time="2025-09-09T06:59:22.395880280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:22.396940 containerd[1614]: time="2025-09-09T06:59:22.396893707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.408014894s" Sep 9 06:59:22.397552 containerd[1614]: time="2025-09-09T06:59:22.396944829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 06:59:22.403256 containerd[1614]: time="2025-09-09T06:59:22.403196998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 06:59:22.413258 containerd[1614]: time="2025-09-09T06:59:22.413099114Z" level=info msg="CreateContainer within sandbox \"4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 06:59:22.427364 containerd[1614]: time="2025-09-09T06:59:22.427320486Z" level=info msg="Container 6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e: CDI devices from CRI Config.CDIDevices: []" Sep 9 06:59:22.434671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173544927.mount: Deactivated successfully. 
Sep 9 06:59:22.448318 containerd[1614]: time="2025-09-09T06:59:22.448256812Z" level=info msg="CreateContainer within sandbox \"4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e\"" Sep 9 06:59:22.449903 containerd[1614]: time="2025-09-09T06:59:22.449866193Z" level=info msg="StartContainer for \"6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e\"" Sep 9 06:59:22.452588 containerd[1614]: time="2025-09-09T06:59:22.452513797Z" level=info msg="connecting to shim 6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e" address="unix:///run/containerd/s/cca32cbee3e611cd913e143e9043d0d590f6b46e2486cfdea51c67ce3eed60dd" protocol=ttrpc version=3 Sep 9 06:59:22.491271 systemd[1]: Started cri-containerd-6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e.scope - libcontainer container 6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e. 
Sep 9 06:59:22.555265 systemd-networkd[1520]: vxlan.calico: Gained IPv6LL Sep 9 06:59:22.615776 containerd[1614]: time="2025-09-09T06:59:22.615689237Z" level=info msg="StartContainer for \"6a9a5a19c1e483c4197e416a996b9572dcdcdba72657b4754de047274a7a783e\" returns successfully" Sep 9 06:59:24.552073 containerd[1614]: time="2025-09-09T06:59:24.551776732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:24.558449 containerd[1614]: time="2025-09-09T06:59:24.558403186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 06:59:24.559555 containerd[1614]: time="2025-09-09T06:59:24.559485746Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:24.562473 containerd[1614]: time="2025-09-09T06:59:24.562411023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:24.563956 containerd[1614]: time="2025-09-09T06:59:24.563436219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.159053575s" Sep 9 06:59:24.563956 containerd[1614]: time="2025-09-09T06:59:24.563479544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 06:59:24.566181 containerd[1614]: 
time="2025-09-09T06:59:24.565944726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 06:59:24.568905 containerd[1614]: time="2025-09-09T06:59:24.568039422Z" level=info msg="CreateContainer within sandbox \"87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 06:59:24.579194 containerd[1614]: time="2025-09-09T06:59:24.579138958Z" level=info msg="Container 2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196: CDI devices from CRI Config.CDIDevices: []" Sep 9 06:59:24.590857 containerd[1614]: time="2025-09-09T06:59:24.590791040Z" level=info msg="CreateContainer within sandbox \"87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196\"" Sep 9 06:59:24.591804 containerd[1614]: time="2025-09-09T06:59:24.591748817Z" level=info msg="StartContainer for \"2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196\"" Sep 9 06:59:24.595318 containerd[1614]: time="2025-09-09T06:59:24.595267400Z" level=info msg="connecting to shim 2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196" address="unix:///run/containerd/s/4ba28a15246529715de08cb8ad1af5f3df080ea3e37abbe4ff3e1a5f3095f514" protocol=ttrpc version=3 Sep 9 06:59:24.638280 systemd[1]: Started cri-containerd-2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196.scope - libcontainer container 2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196. Sep 9 06:59:24.742832 containerd[1614]: time="2025-09-09T06:59:24.742483357Z" level=info msg="StartContainer for \"2fd5bbb83ca3dad0e85006c985df0dae282d382074acc88bdfe338034b542196\" returns successfully" Sep 9 06:59:26.699375 systemd[1]: Started sshd@11-10.230.42.222:22-123.58.213.127:42728.service - OpenSSH per-connection server daemon (123.58.213.127:42728). 
Sep 9 06:59:27.476593 containerd[1614]: time="2025-09-09T06:59:27.476520533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-5vp24,Uid:d119e097-5ad4-4aa5-af1d-a78edabdf1b7,Namespace:calico-apiserver,Attempt:0,}" Sep 9 06:59:27.689701 systemd-networkd[1520]: cali4a456845645: Link UP Sep 9 06:59:27.692354 systemd-networkd[1520]: cali4a456845645: Gained carrier Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.563 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0 calico-apiserver-8545c7b8c4- calico-apiserver d119e097-5ad4-4aa5-af1d-a78edabdf1b7 817 0 2025-09-09 06:58:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8545c7b8c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com calico-apiserver-8545c7b8c4-5vp24 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a456845645 [] [] }} ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.563 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.623 [INFO][4796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" HandleID="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.623 [INFO][4796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" HandleID="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"calico-apiserver-8545c7b8c4-5vp24", "timestamp":"2025-09-09 06:59:27.623572213 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.623 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.624 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.624 [INFO][4796] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com' Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.636 [INFO][4796] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.643 [INFO][4796] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.649 [INFO][4796] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.652 [INFO][4796] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.657 [INFO][4796] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.657 [INFO][4796] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.660 [INFO][4796] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.667 [INFO][4796] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.678 [INFO][4796] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.104.5/26] block=192.168.104.0/26 handle="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.678 [INFO][4796] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.5/26] handle="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.678 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 06:59:27.728178 containerd[1614]: 2025-09-09 06:59:27.679 [INFO][4796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.5/26] IPv6=[] ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" HandleID="k8s-pod-network.41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.731071 containerd[1614]: 2025-09-09 06:59:27.683 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0", GenerateName:"calico-apiserver-8545c7b8c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d119e097-5ad4-4aa5-af1d-a78edabdf1b7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8545c7b8c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8545c7b8c4-5vp24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a456845645", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:27.731071 containerd[1614]: 2025-09-09 06:59:27.683 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.5/32] ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.731071 containerd[1614]: 2025-09-09 06:59:27.683 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a456845645 ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.731071 containerd[1614]: 2025-09-09 06:59:27.693 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" 
Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.731071 containerd[1614]: 2025-09-09 06:59:27.694 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0", GenerateName:"calico-apiserver-8545c7b8c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d119e097-5ad4-4aa5-af1d-a78edabdf1b7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8545c7b8c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e", Pod:"calico-apiserver-8545c7b8c4-5vp24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali4a456845645", MAC:"46:62:09:de:84:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:27.731071 containerd[1614]: 2025-09-09 06:59:27.711 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" Namespace="calico-apiserver" Pod="calico-apiserver-8545c7b8c4-5vp24" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--apiserver--8545c7b8c4--5vp24-eth0" Sep 9 06:59:27.765752 containerd[1614]: time="2025-09-09T06:59:27.765690055Z" level=info msg="connecting to shim 41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e" address="unix:///run/containerd/s/55409ec8211f1b3c9b015ed4b2abf1985ae5172220d51698dce090232d95a98a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:59:27.844375 systemd[1]: Started cri-containerd-41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e.scope - libcontainer container 41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e. Sep 9 06:59:27.939939 containerd[1614]: time="2025-09-09T06:59:27.939863877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8545c7b8c4-5vp24,Uid:d119e097-5ad4-4aa5-af1d-a78edabdf1b7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e\"" Sep 9 06:59:28.065022 sshd[4771]: Received disconnect from 123.58.213.127 port 42728:11: Bye Bye [preauth] Sep 9 06:59:28.065022 sshd[4771]: Disconnected from authenticating user root 123.58.213.127 port 42728 [preauth] Sep 9 06:59:28.070458 systemd[1]: sshd@11-10.230.42.222:22-123.58.213.127:42728.service: Deactivated successfully. 
Sep 9 06:59:29.275271 systemd-networkd[1520]: cali4a456845645: Gained IPv6LL Sep 9 06:59:29.482007 containerd[1614]: time="2025-09-09T06:59:29.481938876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hp4tc,Uid:bcbf9709-7836-4b82-a534-49bb731fb071,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:29.696243 systemd-networkd[1520]: cali141f37adca4: Link UP Sep 9 06:59:29.700494 systemd-networkd[1520]: cali141f37adca4: Gained carrier Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.566 [INFO][4863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0 goldmane-54d579b49d- calico-system bcbf9709-7836-4b82-a534-49bb731fb071 816 0 2025-09-09 06:58:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com goldmane-54d579b49d-hp4tc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali141f37adca4 [] [] }} ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.566 [INFO][4863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.608 [INFO][4874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" 
HandleID="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Workload="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.609 [INFO][4874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" HandleID="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Workload="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efb0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"goldmane-54d579b49d-hp4tc", "timestamp":"2025-09-09 06:59:29.608765893 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.609 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.610 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.610 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com' Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.622 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.634 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.643 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.646 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.654 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.654 [INFO][4874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.658 [INFO][4874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8 Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.667 [INFO][4874] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.681 [INFO][4874] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.104.6/26] block=192.168.104.0/26 handle="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.682 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.6/26] handle="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" host="srv-f5a1c.gb1.brightbox.com" Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.682 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 06:59:29.746212 containerd[1614]: 2025-09-09 06:59:29.682 [INFO][4874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.6/26] IPv6=[] ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" HandleID="k8s-pod-network.aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Workload="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.748891 containerd[1614]: 2025-09-09 06:59:29.688 [INFO][4863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bcbf9709-7836-4b82-a534-49bb731fb071", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-54d579b49d-hp4tc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali141f37adca4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:29.748891 containerd[1614]: 2025-09-09 06:59:29.689 [INFO][4863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.6/32] ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.748891 containerd[1614]: 2025-09-09 06:59:29.689 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali141f37adca4 ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.748891 containerd[1614]: 2025-09-09 06:59:29.699 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.748891 containerd[1614]: 2025-09-09 06:59:29.700 [INFO][4863] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bcbf9709-7836-4b82-a534-49bb731fb071", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8", Pod:"goldmane-54d579b49d-hp4tc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali141f37adca4", MAC:"52:1d:d7:ab:78:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 06:59:29.748891 containerd[1614]: 2025-09-09 06:59:29.738 [INFO][4863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" Namespace="calico-system" Pod="goldmane-54d579b49d-hp4tc" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-goldmane--54d579b49d--hp4tc-eth0" Sep 9 06:59:29.818862 containerd[1614]: time="2025-09-09T06:59:29.818734352Z" level=info msg="connecting to shim aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8" address="unix:///run/containerd/s/06ba2677912d3412836b0655619fcd94acc3fa6d89a6ca31817032c000459f97" namespace=k8s.io protocol=ttrpc version=3 Sep 9 06:59:29.880616 systemd[1]: Started cri-containerd-aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8.scope - libcontainer container aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8. Sep 9 06:59:30.001567 containerd[1614]: time="2025-09-09T06:59:30.001095113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hp4tc,Uid:bcbf9709-7836-4b82-a534-49bb731fb071,Namespace:calico-system,Attempt:0,} returns sandbox id \"aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8\"" Sep 9 06:59:31.192210 containerd[1614]: time="2025-09-09T06:59:31.192140454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:31.193717 containerd[1614]: time="2025-09-09T06:59:31.193501570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 06:59:31.194435 containerd[1614]: time="2025-09-09T06:59:31.194395786Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:31.198181 containerd[1614]: time="2025-09-09T06:59:31.198109574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:31.199530 containerd[1614]: time="2025-09-09T06:59:31.199075126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.633087518s" Sep 9 06:59:31.199530 containerd[1614]: time="2025-09-09T06:59:31.199158477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 06:59:31.200971 containerd[1614]: time="2025-09-09T06:59:31.200934334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 06:59:31.207543 containerd[1614]: time="2025-09-09T06:59:31.207262504Z" level=info msg="CreateContainer within sandbox \"91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 06:59:31.220920 containerd[1614]: time="2025-09-09T06:59:31.219327950Z" level=info msg="Container 398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d: CDI devices from CRI Config.CDIDevices: []" Sep 9 06:59:31.228407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651698375.mount: Deactivated successfully. 
Sep 9 06:59:31.233152 containerd[1614]: time="2025-09-09T06:59:31.233103079Z" level=info msg="CreateContainer within sandbox \"91da013b5357967a824fd2d5916d28d343f5635f120e2754ea5fba018d7eec3d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d\"" Sep 9 06:59:31.235086 containerd[1614]: time="2025-09-09T06:59:31.234225906Z" level=info msg="StartContainer for \"398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d\"" Sep 9 06:59:31.236327 containerd[1614]: time="2025-09-09T06:59:31.236265266Z" level=info msg="connecting to shim 398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d" address="unix:///run/containerd/s/d1357c6c1880ee95bcf98715065a6082458eb47fa7800f31b2d1588a388ff256" protocol=ttrpc version=3 Sep 9 06:59:31.282405 systemd[1]: Started cri-containerd-398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d.scope - libcontainer container 398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d. 
Sep 9 06:59:31.323305 systemd-networkd[1520]: cali141f37adca4: Gained IPv6LL Sep 9 06:59:31.375898 containerd[1614]: time="2025-09-09T06:59:31.375765935Z" level=info msg="StartContainer for \"398c688c7d9036ac51b5eed263ac70607d42ee820be9e23750055fceef14943d\" returns successfully" Sep 9 06:59:31.480608 containerd[1614]: time="2025-09-09T06:59:31.479533923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6947957bb4-v59sm,Uid:be4f2c0b-d13d-4c9e-895f-07176896952f,Namespace:calico-system,Attempt:0,}" Sep 9 06:59:31.480608 containerd[1614]: time="2025-09-09T06:59:31.479694352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b7fvp,Uid:734ec5de-cedd-46dc-a628-24b6dafad37c,Namespace:kube-system,Attempt:0,}" Sep 9 06:59:31.826949 systemd-networkd[1520]: calif4fe5342800: Link UP Sep 9 06:59:31.827333 systemd-networkd[1520]: calif4fe5342800: Gained carrier Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.630 [INFO][4980] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0 coredns-668d6bf9bc- kube-system 734ec5de-cedd-46dc-a628-24b6dafad37c 810 0 2025-09-09 06:58:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com coredns-668d6bf9bc-b7fvp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4fe5342800 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-" Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.632 [INFO][4980] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.733 [INFO][5008] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" HandleID="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Workload="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.734 [INFO][5008] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" HandleID="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Workload="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b39b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-b7fvp", "timestamp":"2025-09-09 06:59:31.73298319 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.734 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.734 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.735 [INFO][5008] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com'
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.761 [INFO][5008] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.771 [INFO][5008] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.783 [INFO][5008] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.787 [INFO][5008] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.790 [INFO][5008] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.790 [INFO][5008] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.792 [INFO][5008] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.801 [INFO][5008] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.808 [INFO][5008] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.104.7/26] block=192.168.104.0/26 handle="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.808 [INFO][5008] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.7/26] handle="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.809 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 06:59:31.865612 containerd[1614]: 2025-09-09 06:59:31.809 [INFO][5008] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.7/26] IPv6=[] ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" HandleID="k8s-pod-network.88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Workload="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.869798 containerd[1614]: 2025-09-09 06:59:31.814 [INFO][4980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"734ec5de-cedd-46dc-a628-24b6dafad37c", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-b7fvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4fe5342800", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 06:59:31.869798 containerd[1614]: 2025-09-09 06:59:31.814 [INFO][4980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.7/32] ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.869798 containerd[1614]: 2025-09-09 06:59:31.815 [INFO][4980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4fe5342800 ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.869798 containerd[1614]: 2025-09-09 06:59:31.827 [INFO][4980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.869798 containerd[1614]: 2025-09-09 06:59:31.828 [INFO][4980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"734ec5de-cedd-46dc-a628-24b6dafad37c", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286", Pod:"coredns-668d6bf9bc-b7fvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4fe5342800", MAC:"4e:93:11:83:f9:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 06:59:31.869798 containerd[1614]: 2025-09-09 06:59:31.859 [INFO][4980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" Namespace="kube-system" Pod="coredns-668d6bf9bc-b7fvp" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b7fvp-eth0"
Sep 9 06:59:31.948808 systemd-networkd[1520]: cali7a044503f84: Link UP
Sep 9 06:59:31.950787 systemd-networkd[1520]: cali7a044503f84: Gained carrier
Sep 9 06:59:31.991383 containerd[1614]: time="2025-09-09T06:59:31.991300071Z" level=info msg="connecting to shim 88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286" address="unix:///run/containerd/s/3a0106168bbe9488e2f28fbcd8f8f0cff250263c2c8670396ba6c553170c5b33" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.631 [INFO][4983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0 calico-kube-controllers-6947957bb4- calico-system be4f2c0b-d13d-4c9e-895f-07176896952f 805 0 2025-09-09 06:58:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6947957bb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-f5a1c.gb1.brightbox.com calico-kube-controllers-6947957bb4-v59sm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7a044503f84 [] [] }} ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.633 [INFO][4983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.735 [INFO][5006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" HandleID="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.735 [INFO][5006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" HandleID="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d740), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f5a1c.gb1.brightbox.com", "pod":"calico-kube-controllers-6947957bb4-v59sm", "timestamp":"2025-09-09 06:59:31.735098224 +0000 UTC"}, Hostname:"srv-f5a1c.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.736 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.809 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.810 [INFO][5006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f5a1c.gb1.brightbox.com'
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.861 [INFO][5006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.878 [INFO][5006] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.898 [INFO][5006] ipam/ipam.go 511: Trying affinity for 192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.905 [INFO][5006] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.911 [INFO][5006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.0/26 host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.911 [INFO][5006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.104.0/26 handle="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.915 [INFO][5006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.921 [INFO][5006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.104.0/26 handle="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.935 [INFO][5006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.104.8/26] block=192.168.104.0/26 handle="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.935 [INFO][5006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.8/26] handle="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" host="srv-f5a1c.gb1.brightbox.com"
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.935 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 06:59:32.004554 containerd[1614]: 2025-09-09 06:59:31.936 [INFO][5006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.104.8/26] IPv6=[] ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" HandleID="k8s-pod-network.c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Workload="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.015768 containerd[1614]: 2025-09-09 06:59:31.940 [INFO][4983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0", GenerateName:"calico-kube-controllers-6947957bb4-", Namespace:"calico-system", SelfLink:"", UID:"be4f2c0b-d13d-4c9e-895f-07176896952f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6947957bb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6947957bb4-v59sm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a044503f84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 06:59:32.015768 containerd[1614]: 2025-09-09 06:59:31.940 [INFO][4983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.8/32] ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.015768 containerd[1614]: 2025-09-09 06:59:31.940 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a044503f84 ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.015768 containerd[1614]: 2025-09-09 06:59:31.946 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.015768 containerd[1614]: 2025-09-09 06:59:31.947 [INFO][4983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0", GenerateName:"calico-kube-controllers-6947957bb4-", Namespace:"calico-system", SelfLink:"", UID:"be4f2c0b-d13d-4c9e-895f-07176896952f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 6, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6947957bb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f5a1c.gb1.brightbox.com", ContainerID:"c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba", Pod:"calico-kube-controllers-6947957bb4-v59sm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a044503f84", MAC:"52:11:b8:21:ee:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 06:59:32.015768 containerd[1614]: 2025-09-09 06:59:31.979 [INFO][4983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" Namespace="calico-system" Pod="calico-kube-controllers-6947957bb4-v59sm" WorkloadEndpoint="srv--f5a1c.gb1.brightbox.com-k8s-calico--kube--controllers--6947957bb4--v59sm-eth0"
Sep 9 06:59:32.105860 containerd[1614]: time="2025-09-09T06:59:32.105182397Z" level=info msg="connecting to shim c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba" address="unix:///run/containerd/s/bd8e22e43ae93f19d858e3e7bde733d45fe41e148c020fa4f861ee307b24eb31" namespace=k8s.io protocol=ttrpc version=3
Sep 9 06:59:32.138429 systemd[1]: Started cri-containerd-88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286.scope - libcontainer container 88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286.
Sep 9 06:59:32.174299 systemd[1]: Started cri-containerd-c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba.scope - libcontainer container c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba.
Sep 9 06:59:32.310710 containerd[1614]: time="2025-09-09T06:59:32.310651936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b7fvp,Uid:734ec5de-cedd-46dc-a628-24b6dafad37c,Namespace:kube-system,Attempt:0,} returns sandbox id \"88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286\""
Sep 9 06:59:32.319122 containerd[1614]: time="2025-09-09T06:59:32.318744101Z" level=info msg="CreateContainer within sandbox \"88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 06:59:32.336922 containerd[1614]: time="2025-09-09T06:59:32.336857992Z" level=info msg="Container 01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:59:32.345121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035974079.mount: Deactivated successfully.
Sep 9 06:59:32.358665 containerd[1614]: time="2025-09-09T06:59:32.358421330Z" level=info msg="CreateContainer within sandbox \"88c506be31074d95ca6e5ea79f36de1391e8d6f7596fa6e7ce8beebeb9fe1286\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af\""
Sep 9 06:59:32.361495 containerd[1614]: time="2025-09-09T06:59:32.360420682Z" level=info msg="StartContainer for \"01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af\""
Sep 9 06:59:32.362924 containerd[1614]: time="2025-09-09T06:59:32.362628922Z" level=info msg="connecting to shim 01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af" address="unix:///run/containerd/s/3a0106168bbe9488e2f28fbcd8f8f0cff250263c2c8670396ba6c553170c5b33" protocol=ttrpc version=3
Sep 9 06:59:32.459490 systemd[1]: Started cri-containerd-01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af.scope - libcontainer container 01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af.
Sep 9 06:59:32.474162 containerd[1614]: time="2025-09-09T06:59:32.474093443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6947957bb4-v59sm,Uid:be4f2c0b-d13d-4c9e-895f-07176896952f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba\""
Sep 9 06:59:32.548657 containerd[1614]: time="2025-09-09T06:59:32.548590654Z" level=info msg="StartContainer for \"01c3b5f49c4965fac552f7316922abf64f9fb777b1256ae86ddf360b1eb191af\" returns successfully"
Sep 9 06:59:33.050087 kubelet[2900]: I0909 06:59:33.049823 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 06:59:33.073754 kubelet[2900]: I0909 06:59:33.073623 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b7fvp" podStartSLOduration=60.073598105 podStartE2EDuration="1m0.073598105s" podCreationTimestamp="2025-09-09 06:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 06:59:33.072555712 +0000 UTC m=+66.869732296" watchObservedRunningTime="2025-09-09 06:59:33.073598105 +0000 UTC m=+66.870774690"
Sep 9 06:59:33.075219 kubelet[2900]: I0909 06:59:33.074773 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8545c7b8c4-nrdkx" podStartSLOduration=37.146808924 podStartE2EDuration="48.074762778s" podCreationTimestamp="2025-09-09 06:58:45 +0000 UTC" firstStartedPulling="2025-09-09 06:59:20.272788735 +0000 UTC m=+54.069965305" lastFinishedPulling="2025-09-09 06:59:31.200742586 +0000 UTC m=+64.997919159" observedRunningTime="2025-09-09 06:59:32.16130626 +0000 UTC m=+65.958482840" watchObservedRunningTime="2025-09-09 06:59:33.074762778 +0000 UTC m=+66.871939353"
Sep 9 06:59:33.691433 systemd-networkd[1520]: calif4fe5342800: Gained IPv6LL
Sep 9 06:59:33.693801 systemd-networkd[1520]: cali7a044503f84: Gained IPv6LL
Sep 9 06:59:34.273171 containerd[1614]: time="2025-09-09T06:59:34.273097837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:34.274602 containerd[1614]: time="2025-09-09T06:59:34.274369488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 06:59:34.276229 containerd[1614]: time="2025-09-09T06:59:34.276178962Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:34.278526 containerd[1614]: time="2025-09-09T06:59:34.278486800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:34.279704 containerd[1614]: time="2025-09-09T06:59:34.279663143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.078686153s"
Sep 9 06:59:34.279791 containerd[1614]: time="2025-09-09T06:59:34.279709577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 06:59:34.281842 containerd[1614]: time="2025-09-09T06:59:34.281549946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 9 06:59:34.285084 containerd[1614]: time="2025-09-09T06:59:34.285030704Z" level=info msg="CreateContainer within sandbox \"4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 06:59:34.315071 containerd[1614]: time="2025-09-09T06:59:34.311350017Z" level=info msg="Container be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:59:34.321217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265754192.mount: Deactivated successfully.
Sep 9 06:59:34.351165 containerd[1614]: time="2025-09-09T06:59:34.350997764Z" level=info msg="CreateContainer within sandbox \"4e24b3ba25f4ae1881724081395b56ed7812f9e09af9e6d9f991a5b370968878\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e\""
Sep 9 06:59:34.355233 containerd[1614]: time="2025-09-09T06:59:34.355180732Z" level=info msg="StartContainer for \"be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e\""
Sep 9 06:59:34.357671 containerd[1614]: time="2025-09-09T06:59:34.357640067Z" level=info msg="connecting to shim be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e" address="unix:///run/containerd/s/cca32cbee3e611cd913e143e9043d0d590f6b46e2486cfdea51c67ce3eed60dd" protocol=ttrpc version=3
Sep 9 06:59:34.396468 systemd[1]: Started cri-containerd-be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e.scope - libcontainer container be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e.
Sep 9 06:59:34.478179 containerd[1614]: time="2025-09-09T06:59:34.478109059Z" level=info msg="StartContainer for \"be762d811e8e37a607a43a20651e50a4a2650ea060ce446a6b0cb69e6351001e\" returns successfully"
Sep 9 06:59:34.811575 kubelet[2900]: I0909 06:59:34.811355 2900 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 06:59:34.814274 kubelet[2900]: I0909 06:59:34.814236 2900 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 06:59:35.115210 kubelet[2900]: I0909 06:59:35.114530 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5qmtg" podStartSLOduration=31.804291049 podStartE2EDuration="46.11448655s" podCreationTimestamp="2025-09-09 06:58:49 +0000 UTC" firstStartedPulling="2025-09-09 06:59:19.970700309 +0000 UTC m=+53.767876871" lastFinishedPulling="2025-09-09 06:59:34.280895793 +0000 UTC m=+68.078072372" observedRunningTime="2025-09-09 06:59:35.110754147 +0000 UTC m=+68.907930723" watchObservedRunningTime="2025-09-09 06:59:35.11448655 +0000 UTC m=+68.911663135"
Sep 9 06:59:37.479289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927597351.mount: Deactivated successfully.
Sep 9 06:59:37.593074 containerd[1614]: time="2025-09-09T06:59:37.592933147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:37.595705 containerd[1614]: time="2025-09-09T06:59:37.595667765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 06:59:37.597004 containerd[1614]: time="2025-09-09T06:59:37.596934574Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:37.599915 containerd[1614]: time="2025-09-09T06:59:37.599824331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 06:59:37.602129 containerd[1614]: time="2025-09-09T06:59:37.601103693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.319505077s" Sep 9 06:59:37.602129 containerd[1614]: time="2025-09-09T06:59:37.601189956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 06:59:37.604016 containerd[1614]: time="2025-09-09T06:59:37.603965577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 06:59:37.607510 containerd[1614]: time="2025-09-09T06:59:37.607403304Z" level=info msg="CreateContainer within sandbox 
\"87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 06:59:37.620353 containerd[1614]: time="2025-09-09T06:59:37.620312382Z" level=info msg="Container 4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e: CDI devices from CRI Config.CDIDevices: []" Sep 9 06:59:37.630971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940122301.mount: Deactivated successfully. Sep 9 06:59:37.639234 containerd[1614]: time="2025-09-09T06:59:37.639031947Z" level=info msg="CreateContainer within sandbox \"87a504d5d3123e0cc02746e2f8e6cc8448f8b8d7bd92475920aff2435e2151ad\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e\"" Sep 9 06:59:37.641094 containerd[1614]: time="2025-09-09T06:59:37.640903038Z" level=info msg="StartContainer for \"4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e\"" Sep 9 06:59:37.644856 containerd[1614]: time="2025-09-09T06:59:37.644820500Z" level=info msg="connecting to shim 4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e" address="unix:///run/containerd/s/4ba28a15246529715de08cb8ad1af5f3df080ea3e37abbe4ff3e1a5f3095f514" protocol=ttrpc version=3 Sep 9 06:59:37.687828 systemd[1]: Started cri-containerd-4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e.scope - libcontainer container 4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e. 
Sep 9 06:59:37.787224 containerd[1614]: time="2025-09-09T06:59:37.786374777Z" level=info msg="StartContainer for \"4a52e95e8d54c5776134af194e9fdfcac1e8237ff6772d61d64191d558569a4e\" returns successfully"
Sep 9 06:59:38.040359 containerd[1614]: time="2025-09-09T06:59:38.039727328Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:38.041057 containerd[1614]: time="2025-09-09T06:59:38.041009377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 06:59:38.058793 containerd[1614]: time="2025-09-09T06:59:38.058725033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 454.71333ms"
Sep 9 06:59:38.058793 containerd[1614]: time="2025-09-09T06:59:38.058789286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 9 06:59:38.063072 containerd[1614]: time="2025-09-09T06:59:38.061394727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 9 06:59:38.065766 containerd[1614]: time="2025-09-09T06:59:38.065355513Z" level=info msg="CreateContainer within sandbox \"41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 06:59:38.073697 containerd[1614]: time="2025-09-09T06:59:38.073659958Z" level=info msg="Container de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:59:38.102010 containerd[1614]: time="2025-09-09T06:59:38.101890262Z" level=info msg="CreateContainer within sandbox \"41f9a2012d90acb825e4798d4e9f48fb2a8a0547e5f2ad6e92c06a0f07be6b4e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e\""
Sep 9 06:59:38.105304 containerd[1614]: time="2025-09-09T06:59:38.105265741Z" level=info msg="StartContainer for \"de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e\""
Sep 9 06:59:38.139280 containerd[1614]: time="2025-09-09T06:59:38.139212892Z" level=info msg="connecting to shim de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e" address="unix:///run/containerd/s/55409ec8211f1b3c9b015ed4b2abf1985ae5172220d51698dce090232d95a98a" protocol=ttrpc version=3
Sep 9 06:59:38.187344 systemd[1]: Started cri-containerd-de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e.scope - libcontainer container de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e.
Sep 9 06:59:38.219760 kubelet[2900]: I0909 06:59:38.219615 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7dc7c5f8ff-zcb9c" podStartSLOduration=3.741976461 podStartE2EDuration="21.219518563s" podCreationTimestamp="2025-09-09 06:59:17 +0000 UTC" firstStartedPulling="2025-09-09 06:59:20.125767876 +0000 UTC m=+53.922944444" lastFinishedPulling="2025-09-09 06:59:37.603309966 +0000 UTC m=+71.400486546" observedRunningTime="2025-09-09 06:59:38.216925116 +0000 UTC m=+72.014101698" watchObservedRunningTime="2025-09-09 06:59:38.219518563 +0000 UTC m=+72.016695138"
Sep 9 06:59:38.329094 containerd[1614]: time="2025-09-09T06:59:38.327722292Z" level=info msg="StartContainer for \"de4525176404dd5314e777450ea0bc89c22b2e1b7412d2ee62c9c24ce3e3a62e\" returns successfully"
Sep 9 06:59:39.217067 kubelet[2900]: I0909 06:59:39.216926 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8545c7b8c4-5vp24" podStartSLOduration=44.098103319 podStartE2EDuration="54.216887938s" podCreationTimestamp="2025-09-09 06:58:45 +0000 UTC" firstStartedPulling="2025-09-09 06:59:27.94193323 +0000 UTC m=+61.739109797" lastFinishedPulling="2025-09-09 06:59:38.060717836 +0000 UTC m=+71.857894416" observedRunningTime="2025-09-09 06:59:39.200766288 +0000 UTC m=+72.997942875" watchObservedRunningTime="2025-09-09 06:59:39.216887938 +0000 UTC m=+73.014064510"
Sep 9 06:59:40.185361 kubelet[2900]: I0909 06:59:40.184960 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 06:59:42.751134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145166764.mount: Deactivated successfully.
Sep 9 06:59:43.488753 kubelet[2900]: I0909 06:59:43.488239 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 06:59:44.099604 containerd[1614]: time="2025-09-09T06:59:44.099527434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:44.103193 containerd[1614]: time="2025-09-09T06:59:44.103156614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 06:59:44.108349 containerd[1614]: time="2025-09-09T06:59:44.108287648Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:44.112702 containerd[1614]: time="2025-09-09T06:59:44.112333616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:44.113643 containerd[1614]: time="2025-09-09T06:59:44.113610413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.052175135s"
Sep 9 06:59:44.113795 containerd[1614]: time="2025-09-09T06:59:44.113768318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 06:59:44.125488 containerd[1614]: time="2025-09-09T06:59:44.125278776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 9 06:59:44.137090 containerd[1614]: time="2025-09-09T06:59:44.136915632Z" level=info msg="CreateContainer within sandbox \"aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 06:59:44.154455 containerd[1614]: time="2025-09-09T06:59:44.152079062Z" level=info msg="Container 45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:59:44.258168 containerd[1614]: time="2025-09-09T06:59:44.258079723Z" level=info msg="CreateContainer within sandbox \"aeb2ceed617ee7ffd4ce9bdf472de3409fb6900e57d8f3c89719d1eef6266de8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\""
Sep 9 06:59:44.259435 containerd[1614]: time="2025-09-09T06:59:44.259287595Z" level=info msg="StartContainer for \"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\""
Sep 9 06:59:44.263123 containerd[1614]: time="2025-09-09T06:59:44.262455069Z" level=info msg="connecting to shim 45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db" address="unix:///run/containerd/s/06ba2677912d3412836b0655619fcd94acc3fa6d89a6ca31817032c000459f97" protocol=ttrpc version=3
Sep 9 06:59:44.500402 systemd[1]: Started cri-containerd-45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db.scope - libcontainer container 45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db.
Sep 9 06:59:44.597152 containerd[1614]: time="2025-09-09T06:59:44.597079320Z" level=info msg="StartContainer for \"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" returns successfully"
Sep 9 06:59:45.375072 kubelet[2900]: I0909 06:59:45.371468 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hp4tc" podStartSLOduration=43.242183738 podStartE2EDuration="57.362686584s" podCreationTimestamp="2025-09-09 06:58:48 +0000 UTC" firstStartedPulling="2025-09-09 06:59:30.004429446 +0000 UTC m=+63.801606009" lastFinishedPulling="2025-09-09 06:59:44.124932285 +0000 UTC m=+77.922108855" observedRunningTime="2025-09-09 06:59:45.357553022 +0000 UTC m=+79.154729610" watchObservedRunningTime="2025-09-09 06:59:45.362686584 +0000 UTC m=+79.159863184"
Sep 9 06:59:45.590090 containerd[1614]: time="2025-09-09T06:59:45.589151749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"f542e687dc175406d4c25d5de2d5f801f3eed4747375c8a46e36ceaffe4aab62\" pid:5373 exit_status:1 exited_at:{seconds:1757401185 nanos:572592450}"
Sep 9 06:59:46.498066 containerd[1614]: time="2025-09-09T06:59:46.497968249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"b03917cc81318602d5853b47650bd72ab38af0966b4e861a63ab9c701b594001\" pid:5396 exited_at:{seconds:1757401186 nanos:497311979}"
Sep 9 06:59:49.489773 containerd[1614]: time="2025-09-09T06:59:49.489441150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"c0e1683bd49cbe5cd6ea80c67a50fc004b3b063c5386d5495461f436295450a5\" pid:5425 exited_at:{seconds:1757401189 nanos:483207248}"
Sep 9 06:59:51.174700 containerd[1614]: time="2025-09-09T06:59:51.174615744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:51.203933 containerd[1614]: time="2025-09-09T06:59:51.203856527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 9 06:59:51.205691 containerd[1614]: time="2025-09-09T06:59:51.205597787Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:51.209896 containerd[1614]: time="2025-09-09T06:59:51.209813923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 06:59:51.211282 containerd[1614]: time="2025-09-09T06:59:51.211237044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.085578236s"
Sep 9 06:59:51.211493 containerd[1614]: time="2025-09-09T06:59:51.211317761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 9 06:59:51.457560 containerd[1614]: time="2025-09-09T06:59:51.456570315Z" level=info msg="CreateContainer within sandbox \"c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 06:59:51.518298 containerd[1614]: time="2025-09-09T06:59:51.518222099Z" level=info msg="Container ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5: CDI devices from CRI Config.CDIDevices: []"
Sep 9 06:59:51.539516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677462890.mount: Deactivated successfully.
Sep 9 06:59:51.549162 containerd[1614]: time="2025-09-09T06:59:51.544524614Z" level=info msg="CreateContainer within sandbox \"c1a78b1ce2fa68da13098a050303ce15445c2bcddb80299b7ab8e7f7647f2eba\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\""
Sep 9 06:59:51.550464 containerd[1614]: time="2025-09-09T06:59:51.550426820Z" level=info msg="StartContainer for \"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\""
Sep 9 06:59:51.558410 containerd[1614]: time="2025-09-09T06:59:51.558297457Z" level=info msg="connecting to shim ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5" address="unix:///run/containerd/s/bd8e22e43ae93f19d858e3e7bde733d45fe41e148c020fa4f861ee307b24eb31" protocol=ttrpc version=3
Sep 9 06:59:51.644584 systemd[1]: Started cri-containerd-ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5.scope - libcontainer container ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5.
Sep 9 06:59:51.999247 containerd[1614]: time="2025-09-09T06:59:51.998991610Z" level=info msg="StartContainer for \"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" returns successfully"
Sep 9 06:59:52.740289 kubelet[2900]: I0909 06:59:52.714608 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6947957bb4-v59sm" podStartSLOduration=44.956043624 podStartE2EDuration="1m3.690389689s" podCreationTimestamp="2025-09-09 06:58:49 +0000 UTC" firstStartedPulling="2025-09-09 06:59:32.47912067 +0000 UTC m=+66.276297232" lastFinishedPulling="2025-09-09 06:59:51.213466722 +0000 UTC m=+85.010643297" observedRunningTime="2025-09-09 06:59:52.684122053 +0000 UTC m=+86.481298642" watchObservedRunningTime="2025-09-09 06:59:52.690389689 +0000 UTC m=+86.487566263"
Sep 9 06:59:52.873941 containerd[1614]: time="2025-09-09T06:59:52.873871099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"850d7bbae7471288bb8450fdb8ed744987b37ec92c5fcce74906fd05da62dc11\" pid:5499 exited_at:{seconds:1757401192 nanos:873109013}"
Sep 9 06:59:57.352076 systemd[1]: Started sshd@12-10.230.42.222:22-139.178.68.195:59116.service - OpenSSH per-connection server daemon (139.178.68.195:59116).
Sep 9 06:59:58.010093 kubelet[2900]: I0909 06:59:58.002432 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 06:59:58.417182 sshd[5529]: Accepted publickey for core from 139.178.68.195 port 59116 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 06:59:58.423095 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 06:59:58.449263 systemd-logind[1558]: New session 12 of user core.
Sep 9 06:59:58.454345 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 06:59:59.697561 sshd[5534]: Connection closed by 139.178.68.195 port 59116
Sep 9 06:59:59.697392 sshd-session[5529]: pam_unix(sshd:session): session closed for user core
Sep 9 06:59:59.716400 systemd[1]: sshd@12-10.230.42.222:22-139.178.68.195:59116.service: Deactivated successfully.
Sep 9 06:59:59.720005 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 06:59:59.721697 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit.
Sep 9 06:59:59.725425 systemd-logind[1558]: Removed session 12.
Sep 9 07:00:04.858575 systemd[1]: Started sshd@13-10.230.42.222:22-139.178.68.195:59726.service - OpenSSH per-connection server daemon (139.178.68.195:59726).
Sep 9 07:00:05.839097 sshd[5556]: Accepted publickey for core from 139.178.68.195 port 59726 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:05.841775 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:05.854926 systemd-logind[1558]: New session 13 of user core.
Sep 9 07:00:05.858968 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 07:00:06.788638 sshd[5559]: Connection closed by 139.178.68.195 port 59726
Sep 9 07:00:06.787455 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:06.796585 systemd[1]: sshd@13-10.230.42.222:22-139.178.68.195:59726.service: Deactivated successfully.
Sep 9 07:00:06.801983 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 07:00:06.804261 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit.
Sep 9 07:00:06.807093 systemd-logind[1558]: Removed session 13.
Sep 9 07:00:07.532611 containerd[1614]: time="2025-09-09T07:00:07.532509844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"4d0dcd9f8234dc18fe40ac3681e8429efb8c7657c5feb606bf188fe182dd153f\" pid:5584 exited_at:{seconds:1757401207 nanos:493888444}"
Sep 9 07:00:07.647061 systemd[1]: Started sshd@14-10.230.42.222:22-117.220.10.3:43244.service - OpenSSH per-connection server daemon (117.220.10.3:43244).
Sep 9 07:00:09.484090 sshd[5595]: Received disconnect from 117.220.10.3 port 43244:11: Bye Bye [preauth]
Sep 9 07:00:09.484090 sshd[5595]: Disconnected from authenticating user root 117.220.10.3 port 43244 [preauth]
Sep 9 07:00:09.485488 systemd[1]: sshd@14-10.230.42.222:22-117.220.10.3:43244.service: Deactivated successfully.
Sep 9 07:00:11.946143 systemd[1]: Started sshd@15-10.230.42.222:22-139.178.68.195:54642.service - OpenSSH per-connection server daemon (139.178.68.195:54642).
Sep 9 07:00:12.885228 sshd[5603]: Accepted publickey for core from 139.178.68.195 port 54642 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:12.887621 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:12.901175 systemd-logind[1558]: New session 14 of user core.
Sep 9 07:00:12.909975 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 07:00:13.767516 sshd[5608]: Connection closed by 139.178.68.195 port 54642
Sep 9 07:00:13.772315 sshd-session[5603]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:13.780668 systemd[1]: sshd@15-10.230.42.222:22-139.178.68.195:54642.service: Deactivated successfully.
Sep 9 07:00:13.785811 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 07:00:13.790101 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Sep 9 07:00:13.792831 systemd-logind[1558]: Removed session 14.
Sep 9 07:00:16.618079 containerd[1614]: time="2025-09-09T07:00:16.616726267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"a1c982039c00c6d70a596bfea34611a19a0eba7883bd2061a265328623411481\" pid:5632 exited_at:{seconds:1757401216 nanos:616330706}"
Sep 9 07:00:18.930659 systemd[1]: Started sshd@16-10.230.42.222:22-139.178.68.195:54650.service - OpenSSH per-connection server daemon (139.178.68.195:54650).
Sep 9 07:00:19.380578 containerd[1614]: time="2025-09-09T07:00:19.378943330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"4de2ac61f2c1d42e94553f711b2d736ae05d652c7a459bf48980fd15eb0decad\" pid:5658 exited_at:{seconds:1757401219 nanos:377826606}"
Sep 9 07:00:19.947601 sshd[5644]: Accepted publickey for core from 139.178.68.195 port 54650 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:19.952389 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:19.963584 systemd-logind[1558]: New session 15 of user core.
Sep 9 07:00:19.968243 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 07:00:20.871508 sshd[5674]: Connection closed by 139.178.68.195 port 54650
Sep 9 07:00:20.871129 sshd-session[5644]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:20.880834 systemd[1]: sshd@16-10.230.42.222:22-139.178.68.195:54650.service: Deactivated successfully.
Sep 9 07:00:20.886663 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 07:00:20.891057 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Sep 9 07:00:20.894603 systemd-logind[1558]: Removed session 15.
Sep 9 07:00:22.848093 containerd[1614]: time="2025-09-09T07:00:22.847662862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"104911691332af4eb4cdf2f146d4e18a44c54a384411d82139fbb3e94d5fc182\" pid:5699 exited_at:{seconds:1757401222 nanos:847235526}"
Sep 9 07:00:26.029646 systemd[1]: Started sshd@17-10.230.42.222:22-139.178.68.195:42818.service - OpenSSH per-connection server daemon (139.178.68.195:42818).
Sep 9 07:00:27.111058 sshd[5709]: Accepted publickey for core from 139.178.68.195 port 42818 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:27.112571 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:27.123702 systemd-logind[1558]: New session 16 of user core.
Sep 9 07:00:27.133409 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 07:00:28.144137 sshd[5714]: Connection closed by 139.178.68.195 port 42818
Sep 9 07:00:28.148558 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:28.164650 systemd[1]: sshd@17-10.230.42.222:22-139.178.68.195:42818.service: Deactivated successfully.
Sep 9 07:00:28.170555 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 07:00:28.175206 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Sep 9 07:00:28.179355 systemd-logind[1558]: Removed session 16.
Sep 9 07:00:28.297963 systemd[1]: Started sshd@18-10.230.42.222:22-139.178.68.195:42830.service - OpenSSH per-connection server daemon (139.178.68.195:42830).
Sep 9 07:00:29.218093 sshd[5727]: Accepted publickey for core from 139.178.68.195 port 42830 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:29.222522 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:29.231549 systemd-logind[1558]: New session 17 of user core.
Sep 9 07:00:29.236464 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 07:00:30.095085 sshd[5730]: Connection closed by 139.178.68.195 port 42830
Sep 9 07:00:30.098887 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:30.120882 systemd[1]: sshd@18-10.230.42.222:22-139.178.68.195:42830.service: Deactivated successfully.
Sep 9 07:00:30.127739 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 07:00:30.130695 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Sep 9 07:00:30.137525 systemd-logind[1558]: Removed session 17.
Sep 9 07:00:30.254451 systemd[1]: Started sshd@19-10.230.42.222:22-139.178.68.195:50788.service - OpenSSH per-connection server daemon (139.178.68.195:50788).
Sep 9 07:00:31.242745 sshd[5740]: Accepted publickey for core from 139.178.68.195 port 50788 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:31.247691 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:31.258089 systemd-logind[1558]: New session 18 of user core.
Sep 9 07:00:31.261311 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 07:00:32.483810 sshd[5743]: Connection closed by 139.178.68.195 port 50788
Sep 9 07:00:32.484582 sshd-session[5740]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:32.499641 systemd[1]: sshd@19-10.230.42.222:22-139.178.68.195:50788.service: Deactivated successfully.
Sep 9 07:00:32.504642 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 07:00:32.507218 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Sep 9 07:00:32.510716 systemd-logind[1558]: Removed session 18.
Sep 9 07:00:37.648860 systemd[1]: Started sshd@20-10.230.42.222:22-139.178.68.195:50802.service - OpenSSH per-connection server daemon (139.178.68.195:50802).
Sep 9 07:00:38.605079 sshd[5761]: Accepted publickey for core from 139.178.68.195 port 50802 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:38.609267 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:38.618807 systemd-logind[1558]: New session 19 of user core.
Sep 9 07:00:38.627310 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 07:00:39.374130 sshd[5764]: Connection closed by 139.178.68.195 port 50802
Sep 9 07:00:39.375414 sshd-session[5761]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:39.382744 systemd[1]: sshd@20-10.230.42.222:22-139.178.68.195:50802.service: Deactivated successfully.
Sep 9 07:00:39.390011 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 07:00:39.394888 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Sep 9 07:00:39.399666 systemd-logind[1558]: Removed session 19.
Sep 9 07:00:39.506704 containerd[1614]: time="2025-09-09T07:00:39.506639949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"afe11817c92a6834c7402379e1e07a16e48504015988985ddf018f91088e2912\" pid:5787 exited_at:{seconds:1757401239 nanos:491621280}"
Sep 9 07:00:41.032922 systemd[1]: Started sshd@21-10.230.42.222:22-123.58.213.127:38384.service - OpenSSH per-connection server daemon (123.58.213.127:38384).
Sep 9 07:00:42.354077 sshd[5798]: Received disconnect from 123.58.213.127 port 38384:11: Bye Bye [preauth]
Sep 9 07:00:42.354077 sshd[5798]: Disconnected from authenticating user root 123.58.213.127 port 38384 [preauth]
Sep 9 07:00:42.359698 systemd[1]: sshd@21-10.230.42.222:22-123.58.213.127:38384.service: Deactivated successfully.
Sep 9 07:00:44.534377 systemd[1]: Started sshd@22-10.230.42.222:22-139.178.68.195:53412.service - OpenSSH per-connection server daemon (139.178.68.195:53412).
Sep 9 07:00:45.479443 sshd[5810]: Accepted publickey for core from 139.178.68.195 port 53412 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:45.482633 sshd-session[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:45.492300 systemd-logind[1558]: New session 20 of user core.
Sep 9 07:00:45.498389 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 07:00:46.280773 sshd[5813]: Connection closed by 139.178.68.195 port 53412
Sep 9 07:00:46.280619 sshd-session[5810]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:46.289234 systemd[1]: sshd@22-10.230.42.222:22-139.178.68.195:53412.service: Deactivated successfully.
Sep 9 07:00:46.293335 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 07:00:46.295914 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Sep 9 07:00:46.298938 systemd-logind[1558]: Removed session 20.
Sep 9 07:00:46.824666 containerd[1614]: time="2025-09-09T07:00:46.823968704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"4030f26cdfc09fd1c405be66e3cf2c6af9076f14250574175473c0bc87ab1b50\" pid:5838 exited_at:{seconds:1757401246 nanos:822852369}"
Sep 9 07:00:49.326809 containerd[1614]: time="2025-09-09T07:00:49.324485978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"b381473e78e21ddcbe8f1e72a8f1286b759a6656c99e3b922ae200f9ede503ca\" pid:5862 exited_at:{seconds:1757401249 nanos:319167555}"
Sep 9 07:00:51.451930 systemd[1]: Started sshd@23-10.230.42.222:22-139.178.68.195:46344.service - OpenSSH per-connection server daemon (139.178.68.195:46344).
Sep 9 07:00:52.514196 sshd[5874]: Accepted publickey for core from 139.178.68.195 port 46344 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:52.516377 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:52.529913 systemd-logind[1558]: New session 21 of user core.
Sep 9 07:00:52.537318 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 07:00:52.751895 containerd[1614]: time="2025-09-09T07:00:52.751828110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"e72ffc2f712fac276a3b53c70450cbe666cee3615f6547220a8de94ad3f12aff\" pid:5906 exited_at:{seconds:1757401252 nanos:750809930}"
Sep 9 07:00:53.783991 sshd[5892]: Connection closed by 139.178.68.195 port 46344
Sep 9 07:00:53.785584 sshd-session[5874]: pam_unix(sshd:session): session closed for user core
Sep 9 07:00:53.792273 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Sep 9 07:00:53.793943 systemd[1]: sshd@23-10.230.42.222:22-139.178.68.195:46344.service: Deactivated successfully.
Sep 9 07:00:53.798617 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 07:00:53.805745 systemd-logind[1558]: Removed session 21.
Sep 9 07:00:58.935968 systemd[1]: Started sshd@24-10.230.42.222:22-139.178.68.195:46350.service - OpenSSH per-connection server daemon (139.178.68.195:46350).
Sep 9 07:00:59.874524 sshd[5934]: Accepted publickey for core from 139.178.68.195 port 46350 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:00:59.877846 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:00:59.886299 systemd-logind[1558]: New session 22 of user core.
Sep 9 07:00:59.900280 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 07:01:00.696076 sshd[5937]: Connection closed by 139.178.68.195 port 46350
Sep 9 07:01:00.697449 sshd-session[5934]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:00.705621 systemd[1]: sshd@24-10.230.42.222:22-139.178.68.195:46350.service: Deactivated successfully.
Sep 9 07:01:00.710812 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 07:01:00.715138 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Sep 9 07:01:00.716961 systemd-logind[1558]: Removed session 22.
Sep 9 07:01:05.900024 systemd[1]: Started sshd@25-10.230.42.222:22-139.178.68.195:38384.service - OpenSSH per-connection server daemon (139.178.68.195:38384).
Sep 9 07:01:07.010993 sshd[5952]: Accepted publickey for core from 139.178.68.195 port 38384 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:07.013432 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:07.029270 systemd-logind[1558]: New session 23 of user core.
Sep 9 07:01:07.035552 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 07:01:07.212157 containerd[1614]: time="2025-09-09T07:01:07.199798024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"9fd8bbc1912359d9e2c0e078139808209d81176f7ede40c6c95699d11d5cafad\" pid:5967 exited_at:{seconds:1757401267 nanos:198777649}"
Sep 9 07:01:07.963952 sshd[5973]: Connection closed by 139.178.68.195 port 38384
Sep 9 07:01:07.966987 sshd-session[5952]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:07.975456 systemd[1]: sshd@25-10.230.42.222:22-139.178.68.195:38384.service: Deactivated successfully.
Sep 9 07:01:07.980460 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 07:01:07.982418 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Sep 9 07:01:07.985708 systemd-logind[1558]: Removed session 23.
Sep 9 07:01:08.131165 systemd[1]: Started sshd@26-10.230.42.222:22-139.178.68.195:38388.service - OpenSSH per-connection server daemon (139.178.68.195:38388).
Sep 9 07:01:09.099272 sshd[5990]: Accepted publickey for core from 139.178.68.195 port 38388 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:09.102504 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:09.110876 systemd-logind[1558]: New session 24 of user core.
Sep 9 07:01:09.120600 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 07:01:10.226591 sshd[5993]: Connection closed by 139.178.68.195 port 38388
Sep 9 07:01:10.227410 sshd-session[5990]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:10.233909 systemd[1]: sshd@26-10.230.42.222:22-139.178.68.195:38388.service: Deactivated successfully.
Sep 9 07:01:10.237761 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 07:01:10.239843 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Sep 9 07:01:10.242219 systemd-logind[1558]: Removed session 24.
Sep 9 07:01:10.384901 systemd[1]: Started sshd@27-10.230.42.222:22-139.178.68.195:42552.service - OpenSSH per-connection server daemon (139.178.68.195:42552).
Sep 9 07:01:11.477518 sshd[6003]: Accepted publickey for core from 139.178.68.195 port 42552 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:11.479824 sshd-session[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:11.491321 systemd-logind[1558]: New session 25 of user core.
Sep 9 07:01:11.498527 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 07:01:13.421826 sshd[6006]: Connection closed by 139.178.68.195 port 42552
Sep 9 07:01:13.422500 sshd-session[6003]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:13.433611 systemd[1]: sshd@27-10.230.42.222:22-139.178.68.195:42552.service: Deactivated successfully.
Sep 9 07:01:13.442504 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 07:01:13.451284 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Sep 9 07:01:13.454202 systemd-logind[1558]: Removed session 25.
Sep 9 07:01:13.597218 systemd[1]: Started sshd@28-10.230.42.222:22-139.178.68.195:42554.service - OpenSSH per-connection server daemon (139.178.68.195:42554).
Sep 9 07:01:14.696482 sshd[6023]: Accepted publickey for core from 139.178.68.195 port 42554 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:14.698503 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:14.708313 systemd-logind[1558]: New session 26 of user core.
Sep 9 07:01:14.718275 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 07:01:16.295719 sshd[6026]: Connection closed by 139.178.68.195 port 42554
Sep 9 07:01:16.296270 sshd-session[6023]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:16.312851 systemd[1]: sshd@28-10.230.42.222:22-139.178.68.195:42554.service: Deactivated successfully.
Sep 9 07:01:16.323160 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 07:01:16.327429 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Sep 9 07:01:16.329672 systemd-logind[1558]: Removed session 26.
Sep 9 07:01:16.467212 systemd[1]: Started sshd@29-10.230.42.222:22-139.178.68.195:42568.service - OpenSSH per-connection server daemon (139.178.68.195:42568).
Sep 9 07:01:17.011083 containerd[1614]: time="2025-09-09T07:01:17.009942592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"02702409b45b3316945996645878fdd1d6f3ed6fb648aedc3e092c3b643f2e24\" pid:6051 exited_at:{seconds:1757401276 nanos:966508576}"
Sep 9 07:01:17.495722 sshd[6036]: Accepted publickey for core from 139.178.68.195 port 42568 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:17.499505 sshd-session[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:17.509182 systemd-logind[1558]: New session 27 of user core.
Sep 9 07:01:17.518293 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 07:01:18.713768 sshd[6062]: Connection closed by 139.178.68.195 port 42568
Sep 9 07:01:18.715979 sshd-session[6036]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:18.726131 systemd[1]: sshd@29-10.230.42.222:22-139.178.68.195:42568.service: Deactivated successfully.
Sep 9 07:01:18.732556 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 07:01:18.736029 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Sep 9 07:01:18.739643 systemd-logind[1558]: Removed session 27.
Sep 9 07:01:19.408224 containerd[1614]: time="2025-09-09T07:01:19.408151967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"4c3ac5bbd6dfffd0848ed457ec9f10ea888eddf2f154a991e5b85ce7b59f706c\" pid:6085 exited_at:{seconds:1757401279 nanos:407243408}"
Sep 9 07:01:22.719343 containerd[1614]: time="2025-09-09T07:01:22.719263748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"156a35504d39d38975388ff2a29e9e479ee8d955b05c126fbe70872ea9cd645a\" pid:6110 exited_at:{seconds:1757401282 nanos:718792466}"
Sep 9 07:01:23.917103 systemd[1]: Started sshd@30-10.230.42.222:22-139.178.68.195:34498.service - OpenSSH per-connection server daemon (139.178.68.195:34498).
Sep 9 07:01:25.052272 sshd[6119]: Accepted publickey for core from 139.178.68.195 port 34498 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:25.055899 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:25.066558 systemd-logind[1558]: New session 28 of user core.
Sep 9 07:01:25.076217 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 07:01:26.298755 sshd[6122]: Connection closed by 139.178.68.195 port 34498
Sep 9 07:01:26.300297 sshd-session[6119]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:26.308870 systemd[1]: sshd@30-10.230.42.222:22-139.178.68.195:34498.service: Deactivated successfully.
Sep 9 07:01:26.309699 systemd-logind[1558]: Session 28 logged out. Waiting for processes to exit.
Sep 9 07:01:26.312065 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 07:01:26.317293 systemd-logind[1558]: Removed session 28.
Sep 9 07:01:31.459477 systemd[1]: Started sshd@31-10.230.42.222:22-139.178.68.195:45626.service - OpenSSH per-connection server daemon (139.178.68.195:45626).
Sep 9 07:01:32.437581 sshd[6136]: Accepted publickey for core from 139.178.68.195 port 45626 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:32.439846 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:32.451468 systemd-logind[1558]: New session 29 of user core.
Sep 9 07:01:32.462528 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 07:01:33.345104 sshd[6139]: Connection closed by 139.178.68.195 port 45626
Sep 9 07:01:33.345558 sshd-session[6136]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:33.355908 systemd[1]: sshd@31-10.230.42.222:22-139.178.68.195:45626.service: Deactivated successfully.
Sep 9 07:01:33.361577 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 07:01:33.364136 systemd-logind[1558]: Session 29 logged out. Waiting for processes to exit.
Sep 9 07:01:33.369081 systemd-logind[1558]: Removed session 29.
Sep 9 07:01:38.491355 systemd[1]: Started sshd@32-10.230.42.222:22-139.178.68.195:45628.service - OpenSSH per-connection server daemon (139.178.68.195:45628).
Sep 9 07:01:39.489706 sshd[6158]: Accepted publickey for core from 139.178.68.195 port 45628 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:39.492990 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:39.507326 systemd-logind[1558]: New session 30 of user core.
Sep 9 07:01:39.515315 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 9 07:01:39.535453 containerd[1614]: time="2025-09-09T07:01:39.535348950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"9bf213faec79ac22c97fcba6aa6afd433ac98c5f479de4e22294feff6084ba50\" pid:6175 exited_at:{seconds:1757401299 nanos:528837876}"
Sep 9 07:01:40.572889 sshd[6181]: Connection closed by 139.178.68.195 port 45628
Sep 9 07:01:40.575668 sshd-session[6158]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:40.585838 systemd-logind[1558]: Session 30 logged out. Waiting for processes to exit.
Sep 9 07:01:40.589886 systemd[1]: sshd@32-10.230.42.222:22-139.178.68.195:45628.service: Deactivated successfully.
Sep 9 07:01:40.593795 systemd[1]: session-30.scope: Deactivated successfully.
Sep 9 07:01:40.597459 systemd-logind[1558]: Removed session 30.
Sep 9 07:01:45.748427 systemd[1]: Started sshd@33-10.230.42.222:22-139.178.68.195:38328.service - OpenSSH per-connection server daemon (139.178.68.195:38328).
Sep 9 07:01:46.101350 systemd[1]: Started sshd@34-10.230.42.222:22-117.220.10.3:55782.service - OpenSSH per-connection server daemon (117.220.10.3:55782).
Sep 9 07:01:46.717820 sshd[6198]: Accepted publickey for core from 139.178.68.195 port 38328 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:46.724582 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:46.742738 systemd-logind[1558]: New session 31 of user core.
Sep 9 07:01:46.750695 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 9 07:01:46.887917 containerd[1614]: time="2025-09-09T07:01:46.887686790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45345b353b281249bd2d4aeb92ad658263970c4b8c96add9bbc9f961336115db\" id:\"dce564f96b49e559224359a0077dd94ba43f3fa00aa3722c6476a01e481d19d2\" pid:6217 exited_at:{seconds:1757401306 nanos:886381338}"
Sep 9 07:01:47.686107 sshd[6227]: Connection closed by 139.178.68.195 port 38328
Sep 9 07:01:47.686511 sshd-session[6198]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:47.695875 systemd[1]: sshd@33-10.230.42.222:22-139.178.68.195:38328.service: Deactivated successfully.
Sep 9 07:01:47.702531 systemd[1]: session-31.scope: Deactivated successfully.
Sep 9 07:01:47.706819 systemd-logind[1558]: Session 31 logged out. Waiting for processes to exit.
Sep 9 07:01:47.712320 systemd-logind[1558]: Removed session 31.
Sep 9 07:01:48.048174 sshd[6202]: Received disconnect from 117.220.10.3 port 55782:11: Bye Bye [preauth]
Sep 9 07:01:48.048174 sshd[6202]: Disconnected from authenticating user root 117.220.10.3 port 55782 [preauth]
Sep 9 07:01:48.051852 systemd[1]: sshd@34-10.230.42.222:22-117.220.10.3:55782.service: Deactivated successfully.
Sep 9 07:01:49.534751 containerd[1614]: time="2025-09-09T07:01:49.532388876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07e90d3d193910f7825a4b20ce1bbdf0d0df92dc9d8c03eb7a96a46e72c428e7\" id:\"2f4c97a312cd36b10ce26ee334ad27ac26cabb906e1c2c77d30d988e573afa11\" pid:6256 exited_at:{seconds:1757401309 nanos:531585999}"
Sep 9 07:01:51.077434 systemd[1]: Started sshd@35-10.230.42.222:22-123.58.213.127:32826.service - OpenSSH per-connection server daemon (123.58.213.127:32826).
Sep 9 07:01:52.461095 sshd[6269]: Received disconnect from 123.58.213.127 port 32826:11: Bye Bye [preauth]
Sep 9 07:01:52.461095 sshd[6269]: Disconnected from authenticating user root 123.58.213.127 port 32826 [preauth]
Sep 9 07:01:52.472132 systemd[1]: sshd@35-10.230.42.222:22-123.58.213.127:32826.service: Deactivated successfully.
Sep 9 07:01:52.725717 containerd[1614]: time="2025-09-09T07:01:52.725549129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff929d2fbeb73cfc7e52df8856dfc2305801f4a4c1e1752f406d8d341c61a8c5\" id:\"93de78b13ca313ac4eab58fcf3d54488778d4270d184dca6fbf755a26998f907\" pid:6285 exited_at:{seconds:1757401312 nanos:725076576}"
Sep 9 07:01:52.844756 systemd[1]: Started sshd@36-10.230.42.222:22-139.178.68.195:54016.service - OpenSSH per-connection server daemon (139.178.68.195:54016).
Sep 9 07:01:53.816289 sshd[6295]: Accepted publickey for core from 139.178.68.195 port 54016 ssh2: RSA SHA256:hnIpA/ZJ23IRtIjfbbVXecd5gy9f+8R3mpLy9pXTs7A
Sep 9 07:01:53.824220 sshd-session[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 07:01:53.832205 systemd-logind[1558]: New session 32 of user core.
Sep 9 07:01:53.842326 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 9 07:01:55.259445 sshd[6298]: Connection closed by 139.178.68.195 port 54016
Sep 9 07:01:55.261635 sshd-session[6295]: pam_unix(sshd:session): session closed for user core
Sep 9 07:01:55.277664 systemd[1]: sshd@36-10.230.42.222:22-139.178.68.195:54016.service: Deactivated successfully.
Sep 9 07:01:55.281026 systemd[1]: session-32.scope: Deactivated successfully.
Sep 9 07:01:55.283887 systemd-logind[1558]: Session 32 logged out. Waiting for processes to exit.
Sep 9 07:01:55.286224 systemd-logind[1558]: Removed session 32.