Sep 13 02:40:05.970735 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:15:39 -00 2025
Sep 13 02:40:05.970793 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 02:40:05.970813 kernel: BIOS-provided physical RAM map:
Sep 13 02:40:05.970823 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 02:40:05.970833 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 02:40:05.970844 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 02:40:05.970863 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 13 02:40:05.970879 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 13 02:40:05.970890 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 02:40:05.970900 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 02:40:05.970917 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 02:40:05.970928 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 02:40:05.970938 kernel: NX (Execute Disable) protection: active
Sep 13 02:40:05.970949 kernel: APIC: Static calls initialized
Sep 13 02:40:05.970961 kernel: SMBIOS 2.8 present.
Sep 13 02:40:05.970978 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 13 02:40:05.970990 kernel: DMI: Memory slots populated: 1/1
Sep 13 02:40:05.971001 kernel: Hypervisor detected: KVM
Sep 13 02:40:05.971013 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 02:40:05.971024 kernel: kvm-clock: using sched offset of 5698409806 cycles
Sep 13 02:40:05.971036 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 02:40:05.971048 kernel: tsc: Detected 2499.998 MHz processor
Sep 13 02:40:05.971060 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 02:40:05.971072 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 02:40:05.971083 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 13 02:40:05.971100 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 02:40:05.971111 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 02:40:05.971123 kernel: Using GB pages for direct mapping
Sep 13 02:40:05.971134 kernel: ACPI: Early table checksum verification disabled
Sep 13 02:40:05.971146 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 13 02:40:05.971157 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971169 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971180 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971192 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 13 02:40:05.971208 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971219 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971231 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971242 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 02:40:05.971254 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 13 02:40:05.971266 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 13 02:40:05.971283 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 13 02:40:05.971340 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 13 02:40:05.971353 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 13 02:40:05.971365 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 13 02:40:05.971377 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 13 02:40:05.971389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 02:40:05.971401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 02:40:05.971413 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 13 02:40:05.971432 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Sep 13 02:40:05.971444 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Sep 13 02:40:05.971457 kernel: Zone ranges:
Sep 13 02:40:05.971469 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 02:40:05.971481 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 13 02:40:05.971493 kernel: Normal empty
Sep 13 02:40:05.971505 kernel: Device empty
Sep 13 02:40:05.971517 kernel: Movable zone start for each node
Sep 13 02:40:05.971528 kernel: Early memory node ranges
Sep 13 02:40:05.971545 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 02:40:05.971558 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 13 02:40:05.971570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 13 02:40:05.971582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 02:40:05.971594 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 02:40:05.971606 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 13 02:40:05.971618 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 02:40:05.971630 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 02:40:05.971642 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 02:40:05.971659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 02:40:05.971671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 02:40:05.971683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 02:40:05.971695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 02:40:05.971707 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 02:40:05.971719 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 02:40:05.971745 kernel: TSC deadline timer available
Sep 13 02:40:05.971757 kernel: CPU topo: Max. logical packages: 16
Sep 13 02:40:05.971769 kernel: CPU topo: Max. logical dies: 16
Sep 13 02:40:05.971787 kernel: CPU topo: Max. dies per package: 1
Sep 13 02:40:05.971825 kernel: CPU topo: Max. threads per core: 1
Sep 13 02:40:05.971838 kernel: CPU topo: Num. cores per package: 1
Sep 13 02:40:05.971850 kernel: CPU topo: Num. threads per package: 1
Sep 13 02:40:05.971863 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Sep 13 02:40:05.971875 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 02:40:05.971887 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 02:40:05.971899 kernel: Booting paravirtualized kernel on KVM
Sep 13 02:40:05.971911 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 02:40:05.971923 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 13 02:40:05.971941 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Sep 13 02:40:05.971953 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Sep 13 02:40:05.971965 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 13 02:40:05.971977 kernel: kvm-guest: PV spinlocks enabled
Sep 13 02:40:05.971989 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 02:40:05.972003 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 02:40:05.972015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 02:40:05.972027 kernel: random: crng init done
Sep 13 02:40:05.972044 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 02:40:05.972056 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 02:40:05.972068 kernel: Fallback order for Node 0: 0
Sep 13 02:40:05.972080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Sep 13 02:40:05.972092 kernel: Policy zone: DMA32
Sep 13 02:40:05.972104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 02:40:05.972116 kernel: software IO TLB: area num 16.
Sep 13 02:40:05.972128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 13 02:40:05.972140 kernel: Kernel/User page tables isolation: enabled
Sep 13 02:40:05.972157 kernel: ftrace: allocating 40122 entries in 157 pages
Sep 13 02:40:05.972169 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 02:40:05.972181 kernel: Dynamic Preempt: voluntary
Sep 13 02:40:05.972193 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 02:40:05.972207 kernel: rcu: RCU event tracing is enabled.
Sep 13 02:40:05.972219 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 13 02:40:05.972231 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 02:40:05.972243 kernel: Rude variant of Tasks RCU enabled.
Sep 13 02:40:05.972255 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 02:40:05.972272 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 02:40:05.972285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 13 02:40:05.978830 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 02:40:05.978848 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 02:40:05.978862 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 02:40:05.978874 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 13 02:40:05.978887 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 02:40:05.978922 kernel: Console: colour VGA+ 80x25
Sep 13 02:40:05.978936 kernel: printk: legacy console [tty0] enabled
Sep 13 02:40:05.978949 kernel: printk: legacy console [ttyS0] enabled
Sep 13 02:40:05.978962 kernel: ACPI: Core revision 20240827
Sep 13 02:40:05.978975 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 02:40:05.978992 kernel: x2apic enabled
Sep 13 02:40:05.979005 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 02:40:05.979019 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 02:40:05.979033 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 13 02:40:05.979051 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 02:40:05.979064 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 02:40:05.979076 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 02:40:05.979089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 02:40:05.979102 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 02:40:05.979114 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 02:40:05.979127 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 02:40:05.979140 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 02:40:05.979152 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 02:40:05.979165 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 02:40:05.979177 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 13 02:40:05.979194 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 13 02:40:05.979207 kernel: active return thunk: its_return_thunk
Sep 13 02:40:05.979219 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 02:40:05.979232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 02:40:05.979245 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 02:40:05.979258 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 02:40:05.979270 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 02:40:05.979283 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 02:40:05.979322 kernel: Freeing SMP alternatives memory: 32K
Sep 13 02:40:05.979340 kernel: pid_max: default: 32768 minimum: 301
Sep 13 02:40:05.979356 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 02:40:05.979380 kernel: landlock: Up and running.
Sep 13 02:40:05.979393 kernel: SELinux: Initializing.
Sep 13 02:40:05.979406 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 02:40:05.979419 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 02:40:05.979432 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 13 02:40:05.979444 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 13 02:40:05.979463 kernel: signal: max sigframe size: 1776
Sep 13 02:40:05.979476 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 02:40:05.979490 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 02:40:05.979503 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Sep 13 02:40:05.979522 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 02:40:05.979535 kernel: smp: Bringing up secondary CPUs ...
Sep 13 02:40:05.979548 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 02:40:05.979561 kernel: .... node #0, CPUs: #1
Sep 13 02:40:05.979573 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 02:40:05.979586 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 13 02:40:05.979600 kernel: Memory: 1897732K/2096616K available (14336K kernel code, 2432K rwdata, 9960K rodata, 53828K init, 1088K bss, 192876K reserved, 0K cma-reserved)
Sep 13 02:40:05.979613 kernel: devtmpfs: initialized
Sep 13 02:40:05.979626 kernel: x86/mm: Memory block size: 128MB
Sep 13 02:40:05.979644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 02:40:05.979657 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 13 02:40:05.979671 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 02:40:05.979684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 02:40:05.979696 kernel: audit: initializing netlink subsys (disabled)
Sep 13 02:40:05.979709 kernel: audit: type=2000 audit(1757731202.207:1): state=initialized audit_enabled=0 res=1
Sep 13 02:40:05.979733 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 02:40:05.979747 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 02:40:05.979760 kernel: cpuidle: using governor menu
Sep 13 02:40:05.979778 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 02:40:05.979791 kernel: dca service started, version 1.12.1
Sep 13 02:40:05.979804 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 13 02:40:05.979818 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 02:40:05.979831 kernel: PCI: Using configuration type 1 for base access
Sep 13 02:40:05.979843 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 02:40:05.979856 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 02:40:05.979869 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 02:40:05.979882 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 02:40:05.979900 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 02:40:05.979913 kernel: ACPI: Added _OSI(Module Device)
Sep 13 02:40:05.979926 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 02:40:05.979939 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 02:40:05.979952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 02:40:05.979964 kernel: ACPI: Interpreter enabled
Sep 13 02:40:05.979977 kernel: ACPI: PM: (supports S0 S5)
Sep 13 02:40:05.979990 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 02:40:05.980003 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 02:40:05.980021 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 02:40:05.980034 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 02:40:05.980047 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 02:40:05.980466 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 02:40:05.980636 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 02:40:05.980811 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 02:40:05.980832 kernel: PCI host bridge to bus 0000:00
Sep 13 02:40:05.981024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 02:40:05.981170 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 02:40:05.981358 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 02:40:05.981507 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 13 02:40:05.981647 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 02:40:05.981806 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 13 02:40:05.981950 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 02:40:05.982161 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 02:40:05.982384 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Sep 13 02:40:05.982548 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Sep 13 02:40:05.982706 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Sep 13 02:40:05.982878 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Sep 13 02:40:05.983036 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 02:40:05.983225 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.983430 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Sep 13 02:40:05.983590 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 13 02:40:05.983761 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 13 02:40:05.983921 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 02:40:05.984107 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.984268 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Sep 13 02:40:05.984495 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 13 02:40:05.984654 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 02:40:05.984828 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 02:40:05.985008 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.985182 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Sep 13 02:40:05.990492 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 13 02:40:05.990700 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 02:40:05.990895 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 02:40:05.991074 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.991237 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Sep 13 02:40:05.992126 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 13 02:40:05.992336 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 02:40:05.992505 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 02:40:05.992680 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.992865 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Sep 13 02:40:05.993026 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 13 02:40:05.993184 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 02:40:05.994027 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 02:40:05.994221 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.994568 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Sep 13 02:40:05.994745 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 13 02:40:05.994906 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 02:40:05.995074 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 02:40:05.995244 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:05.999609 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Sep 13 02:40:05.999818 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 13 02:40:05.999981 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 02:40:06.000140 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 02:40:06.000367 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 13 02:40:06.000531 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Sep 13 02:40:06.000689 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 13 02:40:06.000863 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 02:40:06.001020 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 02:40:06.001193 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 02:40:06.001419 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 13 02:40:06.001588 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Sep 13 02:40:06.001759 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 13 02:40:06.001919 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Sep 13 02:40:06.002091 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 02:40:06.002251 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Sep 13 02:40:06.004136 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Sep 13 02:40:06.004343 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 13 02:40:06.004538 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 02:40:06.004699 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 02:40:06.004890 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 02:40:06.005050 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Sep 13 02:40:06.005207 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Sep 13 02:40:06.007439 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 02:40:06.007622 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 13 02:40:06.007823 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Sep 13 02:40:06.007998 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Sep 13 02:40:06.008164 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 13 02:40:06.008349 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 02:40:06.008512 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 13 02:40:06.008695 kernel: pci_bus 0000:02: extended config space not accessible
Sep 13 02:40:06.008903 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Sep 13 02:40:06.009083 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Sep 13 02:40:06.009247 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 13 02:40:06.010347 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Sep 13 02:40:06.010517 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Sep 13 02:40:06.010706 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 13 02:40:06.011030 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Sep 13 02:40:06.011250 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 13 02:40:06.012493 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 13 02:40:06.012667 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 13 02:40:06.012852 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 13 02:40:06.013020 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 13 02:40:06.013186 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 13 02:40:06.013381 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 13 02:40:06.013404 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 02:40:06.013418 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 02:40:06.013431 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 02:40:06.013445 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 02:40:06.013458 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 02:40:06.013471 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 02:40:06.013484 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 02:40:06.013497 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 02:40:06.013518 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 02:40:06.013531 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 02:40:06.013544 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 02:40:06.013557 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 02:40:06.013570 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 02:40:06.013582 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 02:40:06.013595 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 02:40:06.013608 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 02:40:06.013621 kernel: iommu: Default domain type: Translated
Sep 13 02:40:06.013639 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 02:40:06.013652 kernel: PCI: Using ACPI for IRQ routing
Sep 13 02:40:06.013665 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 02:40:06.013678 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 02:40:06.013691 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 13 02:40:06.013865 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 02:40:06.014025 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 02:40:06.014183 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 02:40:06.014203 kernel: vgaarb: loaded
Sep 13 02:40:06.014223 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 02:40:06.014237 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 02:40:06.014250 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 02:40:06.014263 kernel: pnp: PnP ACPI init
Sep 13 02:40:06.015492 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 02:40:06.015518 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 02:40:06.015532 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 02:40:06.015546 kernel: NET: Registered PF_INET protocol family
Sep 13 02:40:06.015568 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 02:40:06.015581 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 02:40:06.015594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 02:40:06.015608 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 02:40:06.015621 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 02:40:06.015635 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 02:40:06.015648 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 02:40:06.015661 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 02:40:06.015679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 02:40:06.015692 kernel: NET: Registered PF_XDP protocol family
Sep 13 02:40:06.015872 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 13 02:40:06.016039 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 13 02:40:06.016202 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 13 02:40:06.016389 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 13 02:40:06.016552 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 13 02:40:06.016713 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 02:40:06.016890 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 02:40:06.018546 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 02:40:06.018734 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Sep 13 02:40:06.018906 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Sep 13 02:40:06.019070 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Sep 13 02:40:06.019232 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Sep 13 02:40:06.020254 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Sep 13 02:40:06.020456 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Sep 13 02:40:06.020633 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Sep 13 02:40:06.020813 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Sep 13 02:40:06.020995 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 13 02:40:06.021382 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 02:40:06.023349 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 13 02:40:06.023591 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 13 02:40:06.023778 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 13 02:40:06.023975 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 02:40:06.024142 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 13 02:40:06.024330 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 13 02:40:06.024493 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 02:40:06.024653 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 02:40:06.024834 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 13 02:40:06.024995 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 13 02:40:06.025154 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 02:40:06.028041 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 02:40:06.028241 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 13 02:40:06.028428 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 13 02:40:06.028591 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 02:40:06.028768 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 02:40:06.028944 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 13 02:40:06.029105 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 13 02:40:06.029264 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 02:40:06.030559 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 02:40:06.030911 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 13 02:40:06.031111 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 13 02:40:06.031279 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 02:40:06.031473 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 02:40:06.031644 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 13 02:40:06.031824 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 13 02:40:06.032001 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 02:40:06.032163 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 02:40:06.032459 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 13 02:40:06.032627 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 13 02:40:06.032806 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 02:40:06.032968 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 02:40:06.033125 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 02:40:06.033271 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 02:40:06.033442 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 02:40:06.033586 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 13 02:40:06.033744 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 02:40:06.033906 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 13 02:40:06.034079 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 13 02:40:06.034231 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 13 02:40:06.034398 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 02:40:06.034572 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 13 02:40:06.034752 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 13 02:40:06.037406 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 13 02:40:06.037573 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 02:40:06.037755 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 13 02:40:06.037909 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 13 02:40:06.038060 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 02:40:06.038235 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 13 02:40:06.040023 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 13 02:40:06.040190 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 02:40:06.040392 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 13 02:40:06.040550 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 13 02:40:06.040703 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 02:40:06.040885 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 13 02:40:06.041049 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 13
02:40:06.041200 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 02:40:06.043265 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Sep 13 02:40:06.043459 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Sep 13 02:40:06.043615 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 02:40:06.043803 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 13 02:40:06.043957 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 13 02:40:06.044117 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 02:40:06.044140 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 02:40:06.044155 kernel: PCI: CLS 0 bytes, default 64 Sep 13 02:40:06.044169 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 02:40:06.044183 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 13 02:40:06.044197 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 02:40:06.044211 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 13 02:40:06.044224 kernel: Initialise system trusted keyrings Sep 13 02:40:06.044249 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 02:40:06.044263 kernel: Key type asymmetric registered Sep 13 02:40:06.044277 kernel: Asymmetric key parser 'x509' registered Sep 13 02:40:06.044312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 02:40:06.044328 kernel: io scheduler mq-deadline registered Sep 13 02:40:06.044342 kernel: io scheduler kyber registered Sep 13 02:40:06.044356 kernel: io scheduler bfq registered Sep 13 02:40:06.044532 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 13 02:40:06.044697 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 13 02:40:06.044885 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.045051 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 13 02:40:06.045213 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 13 02:40:06.045394 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.045559 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 13 02:40:06.045730 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 13 02:40:06.045904 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.046067 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 13 02:40:06.046227 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 13 02:40:06.046422 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.046589 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 13 02:40:06.046764 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 13 02:40:06.046936 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.047100 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 13 02:40:06.047260 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 13 02:40:06.047489 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.047654 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 13 02:40:06.047832 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 13 02:40:06.048001 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.048165 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 13 02:40:06.048346 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 13 02:40:06.048508 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 02:40:06.048529 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 02:40:06.048544 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 02:40:06.048567 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 02:40:06.048581 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 02:40:06.048595 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 02:40:06.048608 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 02:40:06.048622 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 02:40:06.048636 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 02:40:06.048651 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 02:40:06.048846 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 13 02:40:06.049007 kernel: rtc_cmos 00:03: registered as rtc0 Sep 13 02:40:06.049158 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T02:40:05 UTC (1757731205) Sep 13 02:40:06.049330 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 13 02:40:06.049352 kernel: intel_pstate: CPU model not supported Sep 13 02:40:06.049366 kernel: NET: Registered PF_INET6 protocol family Sep 13 02:40:06.049379 kernel: Segment Routing with IPv6 Sep 13 02:40:06.049393 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 02:40:06.049407 kernel: NET: Registered PF_PACKET protocol family Sep 13 02:40:06.049421 kernel: Key type dns_resolver registered Sep 13 02:40:06.049446 kernel: IPI shorthand broadcast: enabled Sep 13 02:40:06.049460 kernel: 
sched_clock: Marking stable (3528004173, 232493908)->(3905544384, -145046303) Sep 13 02:40:06.049474 kernel: registered taskstats version 1 Sep 13 02:40:06.049488 kernel: Loading compiled-in X.509 certificates Sep 13 02:40:06.049501 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: dd6b45f5ed9ac8d42d60bdb17f83ef06c8bcd8f6' Sep 13 02:40:06.049515 kernel: Demotion targets for Node 0: null Sep 13 02:40:06.049529 kernel: Key type .fscrypt registered Sep 13 02:40:06.049542 kernel: Key type fscrypt-provisioning registered Sep 13 02:40:06.049556 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 02:40:06.049575 kernel: ima: Allocated hash algorithm: sha1 Sep 13 02:40:06.049589 kernel: ima: No architecture policies found Sep 13 02:40:06.049602 kernel: clk: Disabling unused clocks Sep 13 02:40:06.049621 kernel: Warning: unable to open an initial console. Sep 13 02:40:06.049635 kernel: Freeing unused kernel image (initmem) memory: 53828K Sep 13 02:40:06.049649 kernel: Write protecting the kernel read-only data: 24576k Sep 13 02:40:06.049662 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 13 02:40:06.049676 kernel: Run /init as init process Sep 13 02:40:06.049694 kernel: with arguments: Sep 13 02:40:06.049713 kernel: /init Sep 13 02:40:06.049739 kernel: with environment: Sep 13 02:40:06.049753 kernel: HOME=/ Sep 13 02:40:06.049766 kernel: TERM=linux Sep 13 02:40:06.049779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 02:40:06.049804 systemd[1]: Successfully made /usr/ read-only. 
Sep 13 02:40:06.049824 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 13 02:40:06.049845 systemd[1]: Detected virtualization kvm. Sep 13 02:40:06.049860 systemd[1]: Detected architecture x86-64. Sep 13 02:40:06.049874 systemd[1]: Running in initrd. Sep 13 02:40:06.049888 systemd[1]: No hostname configured, using default hostname. Sep 13 02:40:06.049902 systemd[1]: Hostname set to . Sep 13 02:40:06.049917 systemd[1]: Initializing machine ID from VM UUID. Sep 13 02:40:06.049931 systemd[1]: Queued start job for default target initrd.target. Sep 13 02:40:06.049945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 02:40:06.049960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 02:40:06.049980 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 02:40:06.049995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 02:40:06.050010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 02:40:06.050026 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 02:40:06.050041 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 02:40:06.050056 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 02:40:06.050076 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 13 02:40:06.050091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 02:40:06.050105 systemd[1]: Reached target paths.target - Path Units. Sep 13 02:40:06.050120 systemd[1]: Reached target slices.target - Slice Units. Sep 13 02:40:06.050135 systemd[1]: Reached target swap.target - Swaps. Sep 13 02:40:06.050149 systemd[1]: Reached target timers.target - Timer Units. Sep 13 02:40:06.050163 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 02:40:06.050178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 02:40:06.050193 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 02:40:06.050212 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 13 02:40:06.050227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 02:40:06.050242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 02:40:06.050256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 02:40:06.050271 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 02:40:06.050285 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 02:40:06.050319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 02:40:06.050334 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 02:40:06.050356 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 13 02:40:06.050371 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 02:40:06.050385 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 02:40:06.050400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 13 02:40:06.050415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 02:40:06.050429 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 02:40:06.050506 systemd-journald[231]: Collecting audit messages is disabled. Sep 13 02:40:06.050542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 02:40:06.050557 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 02:40:06.050579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 02:40:06.050594 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 02:40:06.050608 kernel: Bridge firewalling registered Sep 13 02:40:06.050622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 02:40:06.050638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 02:40:06.050653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 02:40:06.050669 systemd-journald[231]: Journal started Sep 13 02:40:06.050708 systemd-journald[231]: Runtime Journal (/run/log/journal/1012e9c276854f03b4b7e5843047e810) is 4.7M, max 38.2M, 33.4M free. Sep 13 02:40:05.970927 systemd-modules-load[232]: Inserted module 'overlay' Sep 13 02:40:06.023047 systemd-modules-load[232]: Inserted module 'br_netfilter' Sep 13 02:40:06.116331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 02:40:06.120353 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 02:40:06.125837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 02:40:06.130057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 02:40:06.136429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 02:40:06.139509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 02:40:06.141755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 02:40:06.159839 systemd-tmpfiles[257]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 13 02:40:06.168098 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 02:40:06.170480 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 02:40:06.173749 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 02:40:06.177471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 02:40:06.219649 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294 Sep 13 02:40:06.239948 systemd-resolved[271]: Positive Trust Anchors: Sep 13 02:40:06.239985 systemd-resolved[271]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 02:40:06.240030 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 02:40:06.246357 systemd-resolved[271]: Defaulting to hostname 'linux'. Sep 13 02:40:06.248477 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 02:40:06.249321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 02:40:06.351387 kernel: SCSI subsystem initialized Sep 13 02:40:06.364340 kernel: Loading iSCSI transport class v2.0-870. Sep 13 02:40:06.377334 kernel: iscsi: registered transport (tcp) Sep 13 02:40:06.404674 kernel: iscsi: registered transport (qla4xxx) Sep 13 02:40:06.404807 kernel: QLogic iSCSI HBA Driver Sep 13 02:40:06.432307 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 02:40:06.462348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 02:40:06.463959 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 02:40:06.528088 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 02:40:06.532231 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 13 02:40:06.595352 kernel: raid6: sse2x4 gen() 13807 MB/s Sep 13 02:40:06.613344 kernel: raid6: sse2x2 gen() 9492 MB/s Sep 13 02:40:06.632044 kernel: raid6: sse2x1 gen() 9635 MB/s Sep 13 02:40:06.632166 kernel: raid6: using algorithm sse2x4 gen() 13807 MB/s Sep 13 02:40:06.651023 kernel: raid6: .... xor() 7732 MB/s, rmw enabled Sep 13 02:40:06.651172 kernel: raid6: using ssse3x2 recovery algorithm Sep 13 02:40:06.677340 kernel: xor: automatically using best checksumming function avx Sep 13 02:40:06.868433 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 02:40:06.888679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 02:40:06.901121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 02:40:06.925233 systemd-udevd[480]: Using default interface naming scheme 'v255'. Sep 13 02:40:06.935428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 02:40:06.939481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 02:40:06.972980 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Sep 13 02:40:07.009222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 02:40:07.013437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 02:40:07.136196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 02:40:07.139757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 02:40:07.242354 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 13 02:40:07.262495 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 13 02:40:07.275333 kernel: ACPI: bus type USB registered Sep 13 02:40:07.284627 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 13 02:40:07.284685 kernel: GPT:17805311 != 125829119 Sep 13 02:40:07.291318 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 02:40:07.291378 kernel: GPT:17805311 != 125829119 Sep 13 02:40:07.291396 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 02:40:07.291421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 02:40:07.301386 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 13 02:40:07.301452 kernel: usbcore: registered new interface driver usbfs Sep 13 02:40:07.307316 kernel: usbcore: registered new interface driver hub Sep 13 02:40:07.314351 kernel: usbcore: registered new device driver usb Sep 13 02:40:07.335326 kernel: libata version 3.00 loaded. Sep 13 02:40:07.339317 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 02:40:07.357710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 02:40:07.357893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 02:40:07.359595 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 02:40:07.362076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 02:40:07.373267 kernel: AES CTR mode by8 optimization enabled Sep 13 02:40:07.372278 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 13 02:40:07.410329 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 02:40:07.422577 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 02:40:07.452505 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 13 02:40:07.452866 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 13 02:40:07.453091 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 02:40:07.455315 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 13 02:40:07.455566 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 13 02:40:07.455788 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 13 02:40:07.457357 kernel: hub 1-0:1.0: USB hub found Sep 13 02:40:07.457602 kernel: hub 1-0:1.0: 4 ports detected Sep 13 02:40:07.458328 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 02:40:07.461317 kernel: hub 2-0:1.0: USB hub found Sep 13 02:40:07.461602 kernel: hub 2-0:1.0: 4 ports detected Sep 13 02:40:07.461833 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 13 02:40:07.462049 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 13 02:40:07.462240 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 02:40:07.463912 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 13 02:40:07.573714 kernel: scsi host0: ahci Sep 13 02:40:07.574089 kernel: scsi host1: ahci Sep 13 02:40:07.574316 kernel: scsi host2: ahci Sep 13 02:40:07.574508 kernel: scsi host3: ahci Sep 13 02:40:07.574723 kernel: scsi host4: ahci Sep 13 02:40:07.574917 kernel: scsi host5: ahci Sep 13 02:40:07.575107 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1 Sep 13 02:40:07.575128 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1 Sep 13 02:40:07.575155 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1 Sep 13 02:40:07.575175 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1 Sep 13 02:40:07.575193 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1 Sep 13 02:40:07.575211 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1 Sep 13 02:40:07.582307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 02:40:07.596649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 02:40:07.624878 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 02:40:07.625842 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 02:40:07.640087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 02:40:07.642222 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 02:40:07.665039 disk-uuid[629]: Primary Header is updated. Sep 13 02:40:07.665039 disk-uuid[629]: Secondary Entries is updated. Sep 13 02:40:07.665039 disk-uuid[629]: Secondary Header is updated. 
Sep 13 02:40:07.671345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 02:40:07.681358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 02:40:07.694364 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 02:40:07.804630 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 02:40:07.804738 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 02:40:07.809185 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 02:40:07.809256 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 02:40:07.811086 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 02:40:07.812531 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 02:40:07.867366 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 02:40:07.873546 kernel: usbcore: registered new interface driver usbhid Sep 13 02:40:07.873582 kernel: usbhid: USB HID core driver Sep 13 02:40:07.881662 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 13 02:40:07.881729 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 13 02:40:07.919818 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 02:40:07.922964 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 02:40:07.923847 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 02:40:07.925615 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 02:40:07.929187 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 02:40:07.952499 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 02:40:08.684869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 02:40:08.687142 disk-uuid[630]: The operation has completed successfully. 
Sep 13 02:40:08.747192 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 02:40:08.747394 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 02:40:08.797270 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 02:40:08.812899 sh[656]: Success Sep 13 02:40:08.837348 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 02:40:08.837443 kernel: device-mapper: uevent: version 1.0.3 Sep 13 02:40:08.839615 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 13 02:40:08.854337 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Sep 13 02:40:08.921341 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 02:40:08.924813 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 02:40:08.948895 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 02:40:08.965323 kernel: BTRFS: device fsid ca815b72-c68a-4b5e-8622-cfb6842bab47 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (668) Sep 13 02:40:08.968951 kernel: BTRFS info (device dm-0): first mount of filesystem ca815b72-c68a-4b5e-8622-cfb6842bab47 Sep 13 02:40:08.969004 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 02:40:08.982049 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 02:40:08.982123 kernel: BTRFS info (device dm-0): enabling free space tree Sep 13 02:40:08.985203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 02:40:08.986737 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 13 02:40:08.987615 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 13 02:40:08.988751 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 02:40:08.993618 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 02:40:09.023318 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (701) Sep 13 02:40:09.026334 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 02:40:09.029366 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 02:40:09.035884 kernel: BTRFS info (device vda6): turning on async discard Sep 13 02:40:09.035928 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 02:40:09.043372 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 02:40:09.044705 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 02:40:09.047803 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 02:40:09.135432 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 02:40:09.139848 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 02:40:09.204352 systemd-networkd[839]: lo: Link UP Sep 13 02:40:09.204649 systemd-networkd[839]: lo: Gained carrier Sep 13 02:40:09.208710 systemd-networkd[839]: Enumeration completed Sep 13 02:40:09.208886 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 02:40:09.210447 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 02:40:09.210454 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:40:09.211551 systemd[1]: Reached target network.target - Network. 
Sep 13 02:40:09.213400 systemd-networkd[839]: eth0: Link UP
Sep 13 02:40:09.214682 systemd-networkd[839]: eth0: Gained carrier
Sep 13 02:40:09.214698 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 02:40:09.251442 systemd-networkd[839]: eth0: DHCPv4 address 10.230.23.130/30, gateway 10.230.23.129 acquired from 10.230.23.129
Sep 13 02:40:09.279787 ignition[753]: Ignition 2.21.0
Sep 13 02:40:09.279810 ignition[753]: Stage: fetch-offline
Sep 13 02:40:09.279894 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:09.282470 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 02:40:09.279912 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:09.280099 ignition[753]: parsed url from cmdline: ""
Sep 13 02:40:09.280106 ignition[753]: no config URL provided
Sep 13 02:40:09.280115 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 02:40:09.280131 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Sep 13 02:40:09.286505 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 02:40:09.280140 ignition[753]: failed to fetch config: resource requires networking
Sep 13 02:40:09.280679 ignition[753]: Ignition finished successfully
Sep 13 02:40:09.321324 ignition[848]: Ignition 2.21.0
Sep 13 02:40:09.321345 ignition[848]: Stage: fetch
Sep 13 02:40:09.321560 ignition[848]: no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:09.321579 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:09.321700 ignition[848]: parsed url from cmdline: ""
Sep 13 02:40:09.321707 ignition[848]: no config URL provided
Sep 13 02:40:09.321717 ignition[848]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 02:40:09.321732 ignition[848]: no config at "/usr/lib/ignition/user.ign"
Sep 13 02:40:09.322995 ignition[848]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 13 02:40:09.323045 ignition[848]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 13 02:40:09.323867 ignition[848]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 13 02:40:09.357493 ignition[848]: GET result: OK
Sep 13 02:40:09.358428 ignition[848]: parsing config with SHA512: 2e38808f503eb30530d695e95e67fbc143107d8a3e49442dff6f8f48c8d555be7eda53f9a48d677bc7a88e7dfb1a6e4e92877be36308b97b62bdca1efd675ee1
Sep 13 02:40:09.364800 unknown[848]: fetched base config from "system"
Sep 13 02:40:09.364821 unknown[848]: fetched base config from "system"
Sep 13 02:40:09.365328 ignition[848]: fetch: fetch complete
Sep 13 02:40:09.364834 unknown[848]: fetched user config from "openstack"
Sep 13 02:40:09.365337 ignition[848]: fetch: fetch passed
Sep 13 02:40:09.368367 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 02:40:09.365410 ignition[848]: Ignition finished successfully
Sep 13 02:40:09.371551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 02:40:09.420082 ignition[855]: Ignition 2.21.0
Sep 13 02:40:09.420139 ignition[855]: Stage: kargs
Sep 13 02:40:09.420426 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:09.420447 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:09.421753 ignition[855]: kargs: kargs passed
Sep 13 02:40:09.424734 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 02:40:09.421825 ignition[855]: Ignition finished successfully
Sep 13 02:40:09.427540 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 02:40:09.460269 ignition[861]: Ignition 2.21.0
Sep 13 02:40:09.461610 ignition[861]: Stage: disks
Sep 13 02:40:09.462728 ignition[861]: no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:09.462751 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:09.465288 ignition[861]: disks: disks passed
Sep 13 02:40:09.465418 ignition[861]: Ignition finished successfully
Sep 13 02:40:09.468591 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 02:40:09.469713 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 02:40:09.470939 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 02:40:09.472530 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 02:40:09.474078 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 02:40:09.475471 systemd[1]: Reached target basic.target - Basic System.
Sep 13 02:40:09.478954 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 02:40:09.513518 systemd-fsck[869]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 13 02:40:09.518238 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 02:40:09.521597 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 02:40:09.666490 kernel: EXT4-fs (vda9): mounted filesystem 7f859ed0-e8c8-40c1-91d3-e1e964d8c4e8 r/w with ordered data mode. Quota mode: none.
Sep 13 02:40:09.667271 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 02:40:09.668626 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 02:40:09.674607 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 02:40:09.683064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 02:40:09.684277 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 02:40:09.689522 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 13 02:40:09.690343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 02:40:09.690390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 02:40:09.702971 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 02:40:09.708454 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 02:40:09.709633 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (877)
Sep 13 02:40:09.718868 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 02:40:09.721562 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 02:40:09.731532 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 02:40:09.731599 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 02:40:09.743179 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 02:40:09.792321 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:09.797682 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 02:40:09.807121 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory
Sep 13 02:40:09.817723 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 02:40:09.823131 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 02:40:09.936355 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 02:40:09.939204 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 02:40:09.940884 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 02:40:09.961324 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 02:40:09.963608 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 02:40:09.993277 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 02:40:10.000819 ignition[993]: INFO : Ignition 2.21.0
Sep 13 02:40:10.003416 ignition[993]: INFO : Stage: mount
Sep 13 02:40:10.003416 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:10.003416 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:10.005938 ignition[993]: INFO : mount: mount passed
Sep 13 02:40:10.005938 ignition[993]: INFO : Ignition finished successfully
Sep 13 02:40:10.004961 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 02:40:10.821341 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:10.972641 systemd-networkd[839]: eth0: Gained IPv6LL
Sep 13 02:40:11.351544 systemd-networkd[839]: eth0: Ignoring DHCPv6 address 2a02:1348:179:85e0:24:19ff:fee6:1782/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:85e0:24:19ff:fee6:1782/64 assigned by NDisc.
Sep 13 02:40:11.351560 systemd-networkd[839]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 13 02:40:12.833346 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:16.846331 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:16.855649 coreos-metadata[879]: Sep 13 02:40:16.855 WARN failed to locate config-drive, using the metadata service API instead
Sep 13 02:40:16.881646 coreos-metadata[879]: Sep 13 02:40:16.881 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 13 02:40:16.895865 coreos-metadata[879]: Sep 13 02:40:16.895 INFO Fetch successful
Sep 13 02:40:16.897064 coreos-metadata[879]: Sep 13 02:40:16.896 INFO wrote hostname srv-m9tmw.gb1.brightbox.com to /sysroot/etc/hostname
Sep 13 02:40:16.899479 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 13 02:40:16.899684 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 13 02:40:16.904383 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 02:40:16.924637 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 02:40:16.966356 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Sep 13 02:40:16.969919 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 02:40:16.969956 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 02:40:16.977260 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 02:40:16.977324 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 02:40:16.980108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 02:40:17.018357 ignition[1029]: INFO : Ignition 2.21.0
Sep 13 02:40:17.020410 ignition[1029]: INFO : Stage: files
Sep 13 02:40:17.020410 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:17.022976 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:17.022976 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 02:40:17.024794 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 02:40:17.024794 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 02:40:17.032599 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 02:40:17.032599 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 02:40:17.032599 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 02:40:17.032599 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 02:40:17.032599 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 02:40:17.027091 unknown[1029]: wrote ssh authorized keys file for user: core
Sep 13 02:40:17.234989 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 02:40:17.478390 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 02:40:17.479975 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 02:40:17.479975 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 02:40:17.479975 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 02:40:17.483497 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 02:40:17.496282 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 02:40:17.815047 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 02:40:19.458325 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 02:40:19.458325 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 02:40:19.462346 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 02:40:19.462346 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 02:40:19.462346 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 02:40:19.462346 ignition[1029]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 02:40:19.462346 ignition[1029]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 02:40:19.472517 ignition[1029]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 02:40:19.472517 ignition[1029]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 02:40:19.472517 ignition[1029]: INFO : files: files passed
Sep 13 02:40:19.472517 ignition[1029]: INFO : Ignition finished successfully
Sep 13 02:40:19.465271 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 02:40:19.471617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 02:40:19.477494 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 02:40:19.492982 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 02:40:19.493177 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 02:40:19.504394 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 02:40:19.506086 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 02:40:19.507347 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 02:40:19.509131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 02:40:19.510251 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 02:40:19.512884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 02:40:19.571110 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 02:40:19.571284 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 02:40:19.573065 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 02:40:19.574402 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 02:40:19.575970 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 02:40:19.577048 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 02:40:19.620584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 02:40:19.623810 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 02:40:19.651759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 02:40:19.653739 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 02:40:19.654671 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 02:40:19.656162 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 02:40:19.656351 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 02:40:19.658391 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 02:40:19.659560 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 02:40:19.660990 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 02:40:19.662490 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 02:40:19.663893 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 02:40:19.665507 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 02:40:19.667166 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 02:40:19.668835 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 02:40:19.670452 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 02:40:19.672003 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 02:40:19.673564 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 02:40:19.675105 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 02:40:19.675386 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 02:40:19.677047 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 02:40:19.678188 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 02:40:19.679706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 02:40:19.679925 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 02:40:19.681258 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 02:40:19.681478 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 02:40:19.683627 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 02:40:19.683876 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 02:40:19.685653 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 02:40:19.685888 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 02:40:19.694170 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 02:40:19.696138 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 02:40:19.696406 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 02:40:19.702186 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 02:40:19.702891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 02:40:19.703138 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 02:40:19.704983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 02:40:19.705285 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 02:40:19.718509 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 02:40:19.719343 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 02:40:19.737926 ignition[1083]: INFO : Ignition 2.21.0
Sep 13 02:40:19.737926 ignition[1083]: INFO : Stage: umount
Sep 13 02:40:19.743417 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 02:40:19.743417 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 02:40:19.743417 ignition[1083]: INFO : umount: umount passed
Sep 13 02:40:19.743417 ignition[1083]: INFO : Ignition finished successfully
Sep 13 02:40:19.742176 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 02:40:19.745605 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 02:40:19.745772 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 02:40:19.748213 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 02:40:19.748374 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 02:40:19.751031 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 02:40:19.751106 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 02:40:19.751815 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 02:40:19.751882 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 02:40:19.753331 systemd[1]: Stopped target network.target - Network.
Sep 13 02:40:19.754613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 02:40:19.754692 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 02:40:19.756087 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 02:40:19.757384 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 02:40:19.762530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 02:40:19.763625 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 02:40:19.765106 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 02:40:19.766888 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 02:40:19.766964 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 02:40:19.768386 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 02:40:19.768448 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 02:40:19.769773 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 02:40:19.769858 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 02:40:19.771153 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 02:40:19.771220 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 02:40:19.772786 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 02:40:19.774810 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 02:40:19.777435 systemd-networkd[839]: eth0: DHCPv6 lease lost
Sep 13 02:40:19.780734 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 02:40:19.780913 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 02:40:19.783537 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 02:40:19.783855 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 02:40:19.784064 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 02:40:19.788980 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 02:40:19.789857 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 02:40:19.791488 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 02:40:19.791569 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 02:40:19.793451 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 02:40:19.796690 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 02:40:19.796763 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 02:40:19.798501 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 02:40:19.798576 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 02:40:19.800846 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 02:40:19.800915 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 02:40:19.801648 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 02:40:19.801712 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 02:40:19.803781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 02:40:19.806964 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 02:40:19.807049 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 02:40:19.817252 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 02:40:19.826776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 02:40:19.828969 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 02:40:19.829141 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 02:40:19.831277 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 02:40:19.831421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 02:40:19.833011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 02:40:19.833067 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 02:40:19.834509 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 02:40:19.834581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 02:40:19.836691 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 02:40:19.836756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 02:40:19.838067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 02:40:19.838140 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 02:40:19.841706 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 02:40:19.843683 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 02:40:19.843759 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 02:40:19.847707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 02:40:19.847779 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 02:40:19.850658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 02:40:19.850736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 02:40:19.857005 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 13 02:40:19.857089 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 02:40:19.857161 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 02:40:19.868154 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 02:40:19.868357 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 02:40:19.893446 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 02:40:19.893631 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 02:40:19.895494 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 02:40:19.896417 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 02:40:19.896507 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 02:40:19.899122 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 02:40:19.935374 systemd[1]: Switching root.
Sep 13 02:40:19.969714 systemd-journald[231]: Journal stopped
Sep 13 02:40:21.582869 systemd-journald[231]: Received SIGTERM from PID 1 (systemd).
Sep 13 02:40:21.582988 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 02:40:21.583020 kernel: SELinux: policy capability open_perms=1
Sep 13 02:40:21.583039 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 02:40:21.583057 kernel: SELinux: policy capability always_check_network=0
Sep 13 02:40:21.583094 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 02:40:21.583116 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 02:40:21.583135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 02:40:21.583153 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 02:40:21.583178 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 02:40:21.583207 kernel: audit: type=1403 audit(1757731220.218:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 02:40:21.583231 systemd[1]: Successfully loaded SELinux policy in 52.191ms.
Sep 13 02:40:21.583265 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.636ms.
Sep 13 02:40:21.585309 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 02:40:21.585361 systemd[1]: Detected virtualization kvm.
Sep 13 02:40:21.585385 systemd[1]: Detected architecture x86-64.
Sep 13 02:40:21.585405 systemd[1]: Detected first boot.
Sep 13 02:40:21.585425 systemd[1]: Hostname set to .
Sep 13 02:40:21.585484 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 02:40:21.585515 zram_generator::config[1126]: No configuration found.
Sep 13 02:40:21.585536 kernel: Guest personality initialized and is inactive
Sep 13 02:40:21.585555 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 02:40:21.585589 kernel: Initialized host personality
Sep 13 02:40:21.585608 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 02:40:21.585628 systemd[1]: Populated /etc with preset unit settings.
Sep 13 02:40:21.585650 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 02:40:21.585670 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 02:40:21.585690 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 02:40:21.585710 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 02:40:21.585730 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 02:40:21.585762 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 02:40:21.585785 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 02:40:21.585806 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 02:40:21.585826 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 02:40:21.585848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 02:40:21.585869 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 02:40:21.585910 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 02:40:21.585943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 02:40:21.585965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 02:40:21.585986 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 02:40:21.586006 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 02:40:21.586033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 02:40:21.586077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 02:40:21.586099 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 02:40:21.586122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 02:40:21.586142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 02:40:21.586161 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 02:40:21.586181 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 02:40:21.586214 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 02:40:21.586237 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 02:40:21.586257 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 02:40:21.586315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 02:40:21.586339 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 02:40:21.586359 systemd[1]: Reached target swap.target - Swaps.
Sep 13 02:40:21.586379 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 02:40:21.586409 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 02:40:21.586442 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 02:40:21.586465 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 02:40:21.586494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 02:40:21.586515 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 02:40:21.586549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 02:40:21.586571 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 02:40:21.586591 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 02:40:21.586621 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 02:40:21.586643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:21.586664 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 02:40:21.586683 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 02:40:21.586703 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 02:40:21.586723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 02:40:21.586759 systemd[1]: Reached target machines.target - Containers.
Sep 13 02:40:21.586780 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 02:40:21.586800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 02:40:21.586821 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 02:40:21.586841 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 02:40:21.586861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 02:40:21.586916 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 02:40:21.586938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 02:40:21.586996 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 02:40:21.587024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 02:40:21.587045 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 02:40:21.587078 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 02:40:21.587100 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 02:40:21.587119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 02:40:21.587140 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 02:40:21.587161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 02:40:21.587197 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 02:40:21.587222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 02:40:21.587242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 02:40:21.587263 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 02:40:21.587282 kernel: fuse: init (API version 7.41)
Sep 13 02:40:21.589352 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 02:40:21.589380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 02:40:21.589402 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 02:40:21.589423 systemd[1]: Stopped verity-setup.service.
Sep 13 02:40:21.589458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:21.589503 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 02:40:21.589525 kernel: ACPI: bus type drm_connector registered
Sep 13 02:40:21.589547 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 02:40:21.589567 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 02:40:21.589588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 02:40:21.589607 kernel: loop: module loaded
Sep 13 02:40:21.589626 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 02:40:21.589646 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 02:40:21.589666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 02:40:21.589699 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 02:40:21.589720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 02:40:21.589740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 02:40:21.589760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 02:40:21.589780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 02:40:21.589813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 02:40:21.589835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 02:40:21.589856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 02:40:21.589887 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 02:40:21.589909 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 02:40:21.589930 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 02:40:21.589950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 02:40:21.589971 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 02:40:21.590021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 02:40:21.590061 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 02:40:21.590084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 02:40:21.590106 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 02:40:21.590127 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 02:40:21.590163 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 02:40:21.590225 systemd-journald[1220]: Collecting audit messages is disabled.
Sep 13 02:40:21.590270 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 02:40:21.590309 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 02:40:21.590334 systemd-journald[1220]: Journal started
Sep 13 02:40:21.590369 systemd-journald[1220]: Runtime Journal (/run/log/journal/1012e9c276854f03b4b7e5843047e810) is 4.7M, max 38.2M, 33.4M free.
Sep 13 02:40:21.076606 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 02:40:21.104954 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 02:40:21.105767 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 02:40:21.599321 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 02:40:21.603314 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 02:40:21.611329 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 02:40:21.615313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 02:40:21.621310 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 02:40:21.621372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 02:40:21.629326 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 02:40:21.633315 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 02:40:21.643316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 02:40:21.650316 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 02:40:21.663951 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 02:40:21.667360 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 02:40:21.672161 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 02:40:21.673545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 02:40:21.702447 kernel: loop0: detected capacity change from 0 to 229808
Sep 13 02:40:21.705513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 02:40:21.716979 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 02:40:21.718091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 02:40:21.723234 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 02:40:21.759502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 02:40:21.774775 systemd-journald[1220]: Time spent on flushing to /var/log/journal/1012e9c276854f03b4b7e5843047e810 is 64.672ms for 1169 entries.
Sep 13 02:40:21.774775 systemd-journald[1220]: System Journal (/var/log/journal/1012e9c276854f03b4b7e5843047e810) is 8M, max 584.8M, 576.8M free.
Sep 13 02:40:21.863501 systemd-journald[1220]: Received client request to flush runtime journal.
Sep 13 02:40:21.863585 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 02:40:21.863627 kernel: loop1: detected capacity change from 0 to 146240
Sep 13 02:40:21.840018 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 02:40:21.875474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 02:40:21.879358 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 02:40:21.890277 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 02:40:21.898038 kernel: loop2: detected capacity change from 0 to 113872
Sep 13 02:40:21.899649 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 02:40:21.956595 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Sep 13 02:40:21.957036 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Sep 13 02:40:21.967201 kernel: loop3: detected capacity change from 0 to 8
Sep 13 02:40:21.970001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 02:40:21.989337 kernel: loop4: detected capacity change from 0 to 229808
Sep 13 02:40:22.052876 kernel: loop5: detected capacity change from 0 to 146240
Sep 13 02:40:22.074359 kernel: loop6: detected capacity change from 0 to 113872
Sep 13 02:40:22.091328 kernel: loop7: detected capacity change from 0 to 8
Sep 13 02:40:22.092997 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 13 02:40:22.093853 (sd-merge)[1288]: Merged extensions into '/usr'.
Sep 13 02:40:22.102739 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 02:40:22.102770 systemd[1]: Reloading...
Sep 13 02:40:22.289348 zram_generator::config[1311]: No configuration found.
Sep 13 02:40:22.533182 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 02:40:22.565934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 02:40:22.688870 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 02:40:22.690166 systemd[1]: Reloading finished in 584 ms.
Sep 13 02:40:22.712795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 02:40:22.717222 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 02:40:22.734534 systemd[1]: Starting ensure-sysext.service...
Sep 13 02:40:22.739796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 02:40:22.785784 systemd[1]: Reload requested from client PID 1370 ('systemctl') (unit ensure-sysext.service)...
Sep 13 02:40:22.785834 systemd[1]: Reloading...
Sep 13 02:40:22.819353 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 02:40:22.819765 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 02:40:22.820308 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 02:40:22.820792 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 02:40:22.822632 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 02:40:22.823059 systemd-tmpfiles[1371]: ACLs are not supported, ignoring.
Sep 13 02:40:22.823166 systemd-tmpfiles[1371]: ACLs are not supported, ignoring.
Sep 13 02:40:22.831849 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 02:40:22.831867 systemd-tmpfiles[1371]: Skipping /boot
Sep 13 02:40:22.856655 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 02:40:22.856676 systemd-tmpfiles[1371]: Skipping /boot
Sep 13 02:40:22.913320 zram_generator::config[1398]: No configuration found.
Sep 13 02:40:23.076435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 02:40:23.200111 systemd[1]: Reloading finished in 413 ms.
Sep 13 02:40:23.214509 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 02:40:23.232012 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 02:40:23.242926 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 02:40:23.246666 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 02:40:23.257907 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 02:40:23.261984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 02:40:23.268633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 02:40:23.272670 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 02:40:23.279581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:23.279874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 02:40:23.282690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 02:40:23.291717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 02:40:23.307746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 02:40:23.315533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 02:40:23.315717 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 02:40:23.315894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:23.324053 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:23.325439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 02:40:23.325702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 02:40:23.325839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 02:40:23.335714 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 02:40:23.336541 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:23.344556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:23.344925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 02:40:23.350672 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 02:40:23.352577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 02:40:23.352743 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 02:40:23.352987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:40:23.365699 systemd[1]: Finished ensure-sysext.service.
Sep 13 02:40:23.377708 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 02:40:23.380416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 02:40:23.380663 systemd-udevd[1460]: Using default interface naming scheme 'v255'.
Sep 13 02:40:23.383764 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 02:40:23.385988 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 02:40:23.402525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 02:40:23.402899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 02:40:23.413432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 02:40:23.417910 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 02:40:23.428783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 02:40:23.430448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 02:40:23.431927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 02:40:23.434140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 02:40:23.434906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 02:40:23.437273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 02:40:23.448243 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 02:40:23.448671 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 02:40:23.459035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 02:40:23.467601 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 02:40:23.469969 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 02:40:23.491164 augenrules[1508]: No rules
Sep 13 02:40:23.495220 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 02:40:23.496730 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 02:40:23.529884 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 02:40:23.763348 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 02:40:23.840934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 02:40:23.848641 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 02:40:23.902906 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 02:40:23.903401 systemd-networkd[1498]: lo: Link UP
Sep 13 02:40:23.903867 systemd-networkd[1498]: lo: Gained carrier
Sep 13 02:40:23.903930 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 02:40:23.910432 systemd-timesyncd[1474]: No network connectivity, watching for changes.
Sep 13 02:40:23.911102 systemd-networkd[1498]: Enumeration completed
Sep 13 02:40:23.911333 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 02:40:23.911909 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 02:40:23.912356 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 02:40:23.914221 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 02:40:23.915097 systemd-networkd[1498]: eth0: Link UP
Sep 13 02:40:23.915551 systemd-networkd[1498]: eth0: Gained carrier
Sep 13 02:40:23.915574 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 02:40:23.920874 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 02:40:23.937025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 02:40:23.946254 systemd-networkd[1498]: eth0: DHCPv4 address 10.230.23.130/30, gateway 10.230.23.129 acquired from 10.230.23.129
Sep 13 02:40:23.950410 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection.
Sep 13 02:40:23.954581 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 02:40:23.983190 systemd-resolved[1459]: Positive Trust Anchors:
Sep 13 02:40:23.984179 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 02:40:23.984238 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 02:40:23.995600 systemd-resolved[1459]: Using system hostname 'srv-m9tmw.gb1.brightbox.com'.
Sep 13 02:40:24.000033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 02:40:24.000325 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 02:40:24.000951 systemd[1]: Reached target network.target - Network.
Sep 13 02:40:24.001755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 02:40:24.002825 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 02:40:24.003684 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 02:40:24.004605 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 02:40:24.005457 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 02:40:24.006642 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 02:40:24.007474 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 02:40:24.008232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 02:40:24.008995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 02:40:24.009045 systemd[1]: Reached target paths.target - Path Units.
Sep 13 02:40:24.010521 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 02:40:24.013467 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 02:40:24.016176 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 02:40:24.021579 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 02:40:24.022586 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 02:40:24.024368 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 02:40:24.031717 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 02:40:24.033024 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 02:40:24.034762 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 02:40:24.036935 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 02:40:24.039534 systemd[1]: Reached target basic.target - Basic System.
Sep 13 02:40:24.040275 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 02:40:24.040343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 02:40:24.041997 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 02:40:24.045082 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 02:40:24.048137 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 02:40:24.052110 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 02:40:24.057550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 02:40:24.062681 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 02:40:24.063485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 02:40:24.071600 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 02:40:24.077234 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 02:40:24.082323 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:24.084285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 02:40:24.092618 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 02:40:24.096570 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 02:40:24.104426 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 02:40:24.107515 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 02:40:24.108523 jq[1555]: false
Sep 13 02:40:24.113772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 02:40:24.117043 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 02:40:24.127631 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 02:40:24.133001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 02:40:24.134367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 02:40:24.134687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 02:40:24.153317 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing passwd entry cache
Sep 13 02:40:24.149508 oslogin_cache_refresh[1557]: Refreshing passwd entry cache
Sep 13 02:40:24.161139 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 02:40:24.161915 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 02:40:24.166078 extend-filesystems[1556]: Found /dev/vda6
Sep 13 02:40:24.201896 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 13 02:40:24.201962 kernel: ACPI: button: Power Button [PWRF]
Sep 13 02:40:24.202042 extend-filesystems[1556]: Found /dev/vda9
Sep 13 02:40:24.204581 jq[1569]: true
Sep 13 02:40:24.209458 extend-filesystems[1556]: Checking size of /dev/vda9
Sep 13 02:40:24.206419 oslogin_cache_refresh[1557]: Failure getting users, quitting
Sep 13 02:40:24.211136 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting users, quitting
Sep 13 02:40:24.211136 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 02:40:24.211136 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing group entry cache
Sep 13 02:40:24.206458 oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 02:40:24.214555 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting groups, quitting
Sep 13 02:40:24.214555 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 02:40:24.206584 oslogin_cache_refresh[1557]: Refreshing group entry cache
Sep 13 02:40:24.214219 oslogin_cache_refresh[1557]: Failure getting groups, quitting
Sep 13 02:40:24.214238 oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 02:40:24.226249 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 02:40:24.226953 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 02:40:24.746803 systemd-timesyncd[1474]: Contacted time server 46.17.63.196:123 (1.flatcar.pool.ntp.org).
Sep 13 02:40:24.746880 systemd-timesyncd[1474]: Initial clock synchronization to Sat 2025-09-13 02:40:24.746652 UTC.
Sep 13 02:40:24.747355 systemd-resolved[1459]: Clock change detected. Flushing caches.
Sep 13 02:40:24.753049 update_engine[1565]: I20250913 02:40:24.750644 1565 main.cc:92] Flatcar Update Engine starting
Sep 13 02:40:24.763192 dbus-daemon[1553]: [system] SELinux support is enabled
Sep 13 02:40:24.764110 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 02:40:24.770228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 02:40:24.770297 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 02:40:24.771257 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 02:40:24.771289 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 02:40:24.777583 extend-filesystems[1556]: Resized partition /dev/vda9
Sep 13 02:40:24.781358 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 02:40:24.781812 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 02:40:24.782710 dbus-daemon[1553]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1498 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 13 02:40:24.787160 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 02:40:24.811012 extend-filesystems[1598]: resize2fs 1.47.2 (1-Jan-2025)
Sep 13 02:40:24.812732 tar[1571]: linux-amd64/LICENSE
Sep 13 02:40:24.812732 tar[1571]: linux-amd64/helm
Sep 13 02:40:24.806578 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 02:40:24.821931 update_engine[1565]: I20250913 02:40:24.809351 1565 update_check_scheduler.cc:74] Next update check in 9m13s
Sep 13 02:40:24.806822 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 02:40:24.822070 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Sep 13 02:40:24.812533 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 13 02:40:24.829447 jq[1587]: true
Sep 13 02:40:24.842374 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 02:40:24.997098 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 13 02:40:25.001823 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 13 02:40:25.008079 dbus-daemon[1553]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1600 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 13 02:40:25.020270 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 13 02:40:25.056041 systemd-logind[1564]: New seat seat0.
Sep 13 02:40:25.058260 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 02:40:25.098599 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 13 02:40:25.126408 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 02:40:25.130823 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 02:40:25.131548 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 02:40:25.131548 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 13 02:40:25.131548 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 13 02:40:25.157221 extend-filesystems[1556]: Resized filesystem in /dev/vda9
Sep 13 02:40:25.136587 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 02:40:25.177540 bash[1619]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 02:40:25.136965 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 02:40:25.145185 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 02:40:25.159560 systemd[1]: Starting sshkeys.service...
Sep 13 02:40:25.265002 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 02:40:25.271325 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 02:40:25.313054 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:25.555064 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 02:40:25.623050 containerd[1592]: time="2025-09-13T02:40:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 02:40:25.640003 polkitd[1614]: Started polkitd version 126
Sep 13 02:40:25.659485 containerd[1592]: time="2025-09-13T02:40:25.656517777Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 13 02:40:25.670001 polkitd[1614]: Loading rules from directory /etc/polkit-1/rules.d
Sep 13 02:40:25.679987 polkitd[1614]: Loading rules from directory /run/polkit-1/rules.d
Sep 13 02:40:25.680092 polkitd[1614]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 13 02:40:25.680468 polkitd[1614]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 13 02:40:25.680508 polkitd[1614]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 13 02:40:25.680577 polkitd[1614]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 13 02:40:25.686217 polkitd[1614]: Finished loading, compiling and executing 2 rules
Sep 13 02:40:25.686671 systemd[1]: Started polkit.service - Authorization Manager.
Sep 13 02:40:25.691071 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 02:40:25.695169 polkitd[1614]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 02:40:25.738499 containerd[1592]: time="2025-09-13T02:40:25.737638105Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="27.619µs" Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.738863792Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.738920322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739261477Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739291748Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739347131Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739474930Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739496708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739817196Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs 
snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739853228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739872059Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 02:40:25.740281 containerd[1592]: time="2025-09-13T02:40:25.739886248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 13 02:40:25.745773 containerd[1592]: time="2025-09-13T02:40:25.745459769Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 13 02:40:25.745532 systemd-hostnamed[1600]: Hostname set to (static) Sep 13 02:40:25.748463 containerd[1592]: time="2025-09-13T02:40:25.747635123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 02:40:25.748463 containerd[1592]: time="2025-09-13T02:40:25.747710523Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 02:40:25.748463 containerd[1592]: time="2025-09-13T02:40:25.747734794Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 13 02:40:25.748463 containerd[1592]: time="2025-09-13T02:40:25.747813375Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 13 02:40:25.751448 containerd[1592]: time="2025-09-13T02:40:25.750732699Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 13 
02:40:25.751448 containerd[1592]: time="2025-09-13T02:40:25.750848496Z" level=info msg="metadata content store policy set" policy=shared Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.779839311Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780014231Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780077395Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780120378Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780145282Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780173305Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780196212Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780222423Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780248442Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780272431Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: 
time="2025-09-13T02:40:25.780293246Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780321406Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780549165Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 02:40:25.782609 containerd[1592]: time="2025-09-13T02:40:25.780594294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780620524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780639924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780675647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780699442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780725073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780752023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780780371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780808734Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.780847082Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.781001595Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.781061306Z" level=info msg="Start snapshots syncer" Sep 13 02:40:25.783240 containerd[1592]: time="2025-09-13T02:40:25.781129739Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 02:40:25.783711 containerd[1592]: time="2025-09-13T02:40:25.781572871Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 02:40:25.783711 containerd[1592]: time="2025-09-13T02:40:25.781661595Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 02:40:25.783980 containerd[1592]: time="2025-09-13T02:40:25.781859452Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 02:40:25.783980 containerd[1592]: time="2025-09-13T02:40:25.782008382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.787735551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.787795666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.787819445Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.787874024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.787945299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks 
type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.787971111Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.788057914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.788115692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 02:40:25.788185 containerd[1592]: time="2025-09-13T02:40:25.788140276Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 02:40:25.788743 containerd[1592]: time="2025-09-13T02:40:25.788695420Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 02:40:25.789240 containerd[1592]: time="2025-09-13T02:40:25.788853888Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 02:40:25.789240 containerd[1592]: time="2025-09-13T02:40:25.788880677Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 02:40:25.789506 containerd[1592]: time="2025-09-13T02:40:25.789380235Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 02:40:25.789506 containerd[1592]: time="2025-09-13T02:40:25.789418135Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 02:40:25.790158 containerd[1592]: time="2025-09-13T02:40:25.790127756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 02:40:25.791629 containerd[1592]: 
time="2025-09-13T02:40:25.790273272Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 02:40:25.791629 containerd[1592]: time="2025-09-13T02:40:25.791148250Z" level=info msg="runtime interface created" Sep 13 02:40:25.791629 containerd[1592]: time="2025-09-13T02:40:25.791185209Z" level=info msg="created NRI interface" Sep 13 02:40:25.791629 containerd[1592]: time="2025-09-13T02:40:25.791203261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 02:40:25.791629 containerd[1592]: time="2025-09-13T02:40:25.791227856Z" level=info msg="Connect containerd service" Sep 13 02:40:25.791629 containerd[1592]: time="2025-09-13T02:40:25.791585467Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 02:40:25.803820 containerd[1592]: time="2025-09-13T02:40:25.801781622Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 02:40:25.840543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 02:40:25.875884 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 02:40:25.954261 systemd-logind[1564]: Watching system buttons on /dev/input/event3 (Power Button) Sep 13 02:40:26.082937 systemd-networkd[1498]: eth0: Gained IPv6LL Sep 13 02:40:26.093416 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Sep 13 02:40:26.157326 containerd[1592]: time="2025-09-13T02:40:26.157255658Z" level=info msg="Start subscribing containerd event"
Sep 13 02:40:26.159186 containerd[1592]: time="2025-09-13T02:40:26.159126347Z" level=info msg="Start recovering state"
Sep 13 02:40:26.159446 containerd[1592]: time="2025-09-13T02:40:26.159420751Z" level=info msg="Start event monitor"
Sep 13 02:40:26.160165 containerd[1592]: time="2025-09-13T02:40:26.160133071Z" level=info msg="Start cni network conf syncer for default"
Sep 13 02:40:26.160258 containerd[1592]: time="2025-09-13T02:40:26.160177332Z" level=info msg="Start streaming server"
Sep 13 02:40:26.160258 containerd[1592]: time="2025-09-13T02:40:26.160208146Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 13 02:40:26.160258 containerd[1592]: time="2025-09-13T02:40:26.160222016Z" level=info msg="runtime interface starting up..."
Sep 13 02:40:26.160258 containerd[1592]: time="2025-09-13T02:40:26.160240342Z" level=info msg="starting plugins..."
Sep 13 02:40:26.160402 containerd[1592]: time="2025-09-13T02:40:26.160301693Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 13 02:40:26.165118 containerd[1592]: time="2025-09-13T02:40:26.165076387Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 02:40:26.165309 containerd[1592]: time="2025-09-13T02:40:26.165282860Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 02:40:26.165531 containerd[1592]: time="2025-09-13T02:40:26.165506343Z" level=info msg="containerd successfully booted in 0.570819s"
Sep 13 02:40:26.225783 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 02:40:26.323549 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 02:40:26.353845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:40:26.398395 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 02:40:26.491873 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 02:40:26.515143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 02:40:26.689109 tar[1571]: linux-amd64/README.md
Sep 13 02:40:26.721010 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 02:40:26.775919 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 02:40:26.810340 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 02:40:26.816332 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 02:40:26.839398 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 02:40:26.840409 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 02:40:26.845426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 02:40:26.874971 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 02:40:26.881566 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 02:40:26.885365 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 02:40:26.886447 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 02:40:26.945685 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 02:40:26.950422 systemd[1]: Started sshd@0-10.230.23.130:22-139.178.89.65:59808.service - OpenSSH per-connection server daemon (139.178.89.65:59808).
Sep 13 02:40:27.030421 systemd-networkd[1498]: eth0: Ignoring DHCPv6 address 2a02:1348:179:85e0:24:19ff:fee6:1782/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:85e0:24:19ff:fee6:1782/64 assigned by NDisc.
Sep 13 02:40:27.030627 systemd-networkd[1498]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 13 02:40:27.374093 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:27.374274 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:27.663385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:40:27.679651 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 02:40:27.879259 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 59808 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:27.883613 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:27.919848 systemd-logind[1564]: New session 1 of user core.
Sep 13 02:40:27.921114 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 02:40:27.926466 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 02:40:27.969351 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 02:40:27.980323 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 02:40:27.997432 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:40:28.004948 systemd-logind[1564]: New session c1 of user core.
Sep 13 02:40:28.202111 systemd[1728]: Queued start job for default target default.target.
Sep 13 02:40:28.209360 systemd[1728]: Created slice app.slice - User Application Slice.
Sep 13 02:40:28.209599 systemd[1728]: Reached target paths.target - Paths.
Sep 13 02:40:28.209678 systemd[1728]: Reached target timers.target - Timers.
Sep 13 02:40:28.213188 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 02:40:28.236982 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 02:40:28.238289 systemd[1728]: Reached target sockets.target - Sockets.
Sep 13 02:40:28.238476 systemd[1728]: Reached target basic.target - Basic System.
Sep 13 02:40:28.238736 systemd[1728]: Reached target default.target - Main User Target.
Sep 13 02:40:28.238913 systemd[1728]: Startup finished in 222ms.
Sep 13 02:40:28.239014 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 02:40:28.249439 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 02:40:28.363180 kubelet[1721]: E0913 02:40:28.363092 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 02:40:28.366412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 02:40:28.366705 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 02:40:28.367395 systemd[1]: kubelet.service: Consumed 1.117s CPU time, 267.4M memory peak.
Sep 13 02:40:28.887630 systemd[1]: Started sshd@1-10.230.23.130:22-139.178.89.65:59812.service - OpenSSH per-connection server daemon (139.178.89.65:59812).
Sep 13 02:40:29.394194 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:29.395240 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:29.802735 sshd[1740]: Accepted publickey for core from 139.178.89.65 port 59812 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:29.804905 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:29.813194 systemd-logind[1564]: New session 2 of user core.
Sep 13 02:40:29.821326 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 02:40:30.422324 sshd[1744]: Connection closed by 139.178.89.65 port 59812
Sep 13 02:40:30.422177 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:30.428398 systemd[1]: sshd@1-10.230.23.130:22-139.178.89.65:59812.service: Deactivated successfully.
Sep 13 02:40:30.430779 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 02:40:30.432615 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit.
Sep 13 02:40:30.434954 systemd-logind[1564]: Removed session 2.
Sep 13 02:40:30.579455 systemd[1]: Started sshd@2-10.230.23.130:22-139.178.89.65:56498.service - OpenSSH per-connection server daemon (139.178.89.65:56498).
Sep 13 02:40:31.502504 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 56498 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:31.504693 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:31.512252 systemd-logind[1564]: New session 3 of user core.
Sep 13 02:40:31.520420 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 02:40:31.975775 login[1708]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 02:40:31.984233 systemd-logind[1564]: New session 4 of user core.
Sep 13 02:40:31.986857 login[1709]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 02:40:31.994332 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 02:40:32.004152 systemd-logind[1564]: New session 5 of user core.
Sep 13 02:40:32.012789 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 02:40:32.132341 sshd[1752]: Connection closed by 139.178.89.65 port 56498
Sep 13 02:40:32.133915 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:32.140249 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit.
Sep 13 02:40:32.141736 systemd[1]: sshd@2-10.230.23.130:22-139.178.89.65:56498.service: Deactivated successfully.
Sep 13 02:40:32.145448 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 02:40:32.148854 systemd-logind[1564]: Removed session 3.
Sep 13 02:40:33.418095 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:33.418319 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 13 02:40:33.432838 coreos-metadata[1636]: Sep 13 02:40:33.432 WARN failed to locate config-drive, using the metadata service API instead
Sep 13 02:40:33.439094 coreos-metadata[1552]: Sep 13 02:40:33.437 WARN failed to locate config-drive, using the metadata service API instead
Sep 13 02:40:33.462440 coreos-metadata[1636]: Sep 13 02:40:33.462 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Sep 13 02:40:33.462637 coreos-metadata[1552]: Sep 13 02:40:33.462 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Sep 13 02:40:33.469776 coreos-metadata[1552]: Sep 13 02:40:33.469 INFO Fetch failed with 404: resource not found
Sep 13 02:40:33.469913 coreos-metadata[1552]: Sep 13 02:40:33.469 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 13 02:40:33.470156 coreos-metadata[1552]: Sep 13 02:40:33.470 INFO Fetch successful
Sep 13 02:40:33.470350 coreos-metadata[1552]: Sep 13 02:40:33.470 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Sep 13 02:40:33.482081 coreos-metadata[1552]: Sep 13 02:40:33.481 INFO Fetch successful
Sep 13 02:40:33.482081 coreos-metadata[1552]: Sep 13 02:40:33.482 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Sep 13 02:40:33.490009 coreos-metadata[1636]: Sep 13 02:40:33.489 INFO Fetch successful
Sep 13 02:40:33.490278 coreos-metadata[1636]: Sep 13 02:40:33.490 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 13 02:40:33.499291 coreos-metadata[1552]: Sep 13 02:40:33.499 INFO Fetch successful
Sep 13 02:40:33.499291 coreos-metadata[1552]: Sep 13 02:40:33.499 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Sep 13 02:40:33.515298 coreos-metadata[1552]: Sep 13 02:40:33.515 INFO Fetch successful
Sep 13 02:40:33.515465 coreos-metadata[1552]: Sep 13 02:40:33.515 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Sep 13 02:40:33.520898 coreos-metadata[1636]: Sep 13 02:40:33.520 INFO Fetch successful
Sep 13 02:40:33.523138 unknown[1636]: wrote ssh authorized keys file for user: core
Sep 13 02:40:33.534216 coreos-metadata[1552]: Sep 13 02:40:33.534 INFO Fetch successful
Sep 13 02:40:33.561921 update-ssh-keys[1785]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 02:40:33.564595 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 02:40:33.567759 systemd[1]: Finished sshkeys.service.
Sep 13 02:40:33.578752 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 02:40:33.579675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 02:40:33.579913 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 02:40:33.582149 systemd[1]: Startup finished in 3.607s (kernel) + 14.574s (initrd) + 12.897s (userspace) = 31.080s.
Sep 13 02:40:38.520530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 02:40:38.523444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:40:38.731780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:40:38.746616 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 02:40:38.799454 kubelet[1802]: E0913 02:40:38.799218 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 02:40:38.805275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 02:40:38.805701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 02:40:38.806777 systemd[1]: kubelet.service: Consumed 228ms CPU time, 108.2M memory peak.
Sep 13 02:40:42.302230 systemd[1]: Started sshd@3-10.230.23.130:22-139.178.89.65:57178.service - OpenSSH per-connection server daemon (139.178.89.65:57178).
Sep 13 02:40:43.218422 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 57178 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:43.220395 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:43.227347 systemd-logind[1564]: New session 6 of user core.
Sep 13 02:40:43.237242 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 02:40:43.839049 sshd[1811]: Connection closed by 139.178.89.65 port 57178
Sep 13 02:40:43.840051 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:43.845135 systemd[1]: sshd@3-10.230.23.130:22-139.178.89.65:57178.service: Deactivated successfully.
Sep 13 02:40:43.847565 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 02:40:43.848925 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit.
Sep 13 02:40:43.851182 systemd-logind[1564]: Removed session 6.
Sep 13 02:40:44.005012 systemd[1]: Started sshd@4-10.230.23.130:22-139.178.89.65:57182.service - OpenSSH per-connection server daemon (139.178.89.65:57182).
Sep 13 02:40:44.909406 sshd[1817]: Accepted publickey for core from 139.178.89.65 port 57182 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:44.911470 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:44.920298 systemd-logind[1564]: New session 7 of user core.
Sep 13 02:40:44.928466 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 02:40:45.521107 sshd[1819]: Connection closed by 139.178.89.65 port 57182
Sep 13 02:40:45.522143 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:45.527156 systemd[1]: sshd@4-10.230.23.130:22-139.178.89.65:57182.service: Deactivated successfully.
Sep 13 02:40:45.529616 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 02:40:45.532166 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit.
Sep 13 02:40:45.534180 systemd-logind[1564]: Removed session 7.
Sep 13 02:40:45.690764 systemd[1]: Started sshd@5-10.230.23.130:22-139.178.89.65:57186.service - OpenSSH per-connection server daemon (139.178.89.65:57186).
Sep 13 02:40:46.655479 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 57186 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:46.657833 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:46.669873 systemd-logind[1564]: New session 8 of user core.
Sep 13 02:40:46.679364 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 02:40:47.316016 sshd[1827]: Connection closed by 139.178.89.65 port 57186
Sep 13 02:40:47.317168 sshd-session[1825]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:47.324329 systemd[1]: sshd@5-10.230.23.130:22-139.178.89.65:57186.service: Deactivated successfully.
Sep 13 02:40:47.327216 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 02:40:47.328399 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit.
Sep 13 02:40:47.330833 systemd-logind[1564]: Removed session 8.
Sep 13 02:40:47.474804 systemd[1]: Started sshd@6-10.230.23.130:22-139.178.89.65:57200.service - OpenSSH per-connection server daemon (139.178.89.65:57200).
Sep 13 02:40:48.386817 sshd[1833]: Accepted publickey for core from 139.178.89.65 port 57200 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:48.388945 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:48.396561 systemd-logind[1564]: New session 9 of user core.
Sep 13 02:40:48.412378 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 02:40:48.877275 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 02:40:48.878635 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 02:40:48.880442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 02:40:48.883980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:40:48.897897 sudo[1836]: pam_unix(sudo:session): session closed for user root
Sep 13 02:40:49.041742 sshd[1835]: Connection closed by 139.178.89.65 port 57200
Sep 13 02:40:49.041412 sshd-session[1833]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:49.047595 systemd[1]: sshd@6-10.230.23.130:22-139.178.89.65:57200.service: Deactivated successfully.
Sep 13 02:40:49.050288 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 02:40:49.053321 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit.
Sep 13 02:40:49.055454 systemd-logind[1564]: Removed session 9.
Sep 13 02:40:49.084833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:40:49.094448 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 02:40:49.152282 kubelet[1849]: E0913 02:40:49.151968 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 02:40:49.155739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 02:40:49.156229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 02:40:49.157102 systemd[1]: kubelet.service: Consumed 221ms CPU time, 108.5M memory peak.
Sep 13 02:40:49.202904 systemd[1]: Started sshd@7-10.230.23.130:22-139.178.89.65:57208.service - OpenSSH per-connection server daemon (139.178.89.65:57208).
Sep 13 02:40:50.111691 sshd[1857]: Accepted publickey for core from 139.178.89.65 port 57208 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:50.113607 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:50.121702 systemd-logind[1564]: New session 10 of user core.
Sep 13 02:40:50.126284 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 02:40:50.592252 sudo[1861]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 02:40:50.593314 sudo[1861]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 02:40:50.600068 sudo[1861]: pam_unix(sudo:session): session closed for user root
Sep 13 02:40:50.608927 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 13 02:40:50.609485 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 02:40:50.625930 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 02:40:50.686900 augenrules[1883]: No rules
Sep 13 02:40:50.688542 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 02:40:50.688939 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 02:40:50.691775 sudo[1860]: pam_unix(sudo:session): session closed for user root
Sep 13 02:40:50.835627 sshd[1859]: Connection closed by 139.178.89.65 port 57208
Sep 13 02:40:50.836620 sshd-session[1857]: pam_unix(sshd:session): session closed for user core
Sep 13 02:40:50.842950 systemd[1]: sshd@7-10.230.23.130:22-139.178.89.65:57208.service: Deactivated successfully.
Sep 13 02:40:50.846278 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 02:40:50.847849 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit.
Sep 13 02:40:50.850346 systemd-logind[1564]: Removed session 10.
Sep 13 02:40:51.001126 systemd[1]: Started sshd@8-10.230.23.130:22-139.178.89.65:40314.service - OpenSSH per-connection server daemon (139.178.89.65:40314).
Sep 13 02:40:51.897355 sshd[1892]: Accepted publickey for core from 139.178.89.65 port 40314 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:40:51.899287 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:40:51.907707 systemd-logind[1564]: New session 11 of user core.
Sep 13 02:40:51.914261 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 02:40:52.397485 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 02:40:52.397971 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 02:40:52.952217 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 02:40:52.968647 (dockerd)[1913]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 02:40:53.318467 dockerd[1913]: time="2025-09-13T02:40:53.317980889Z" level=info msg="Starting up"
Sep 13 02:40:53.321947 dockerd[1913]: time="2025-09-13T02:40:53.321701697Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 13 02:40:53.390854 dockerd[1913]: time="2025-09-13T02:40:53.390778122Z" level=info msg="Loading containers: start."
Sep 13 02:40:53.409219 kernel: Initializing XFRM netlink socket
Sep 13 02:40:53.757891 systemd-networkd[1498]: docker0: Link UP
Sep 13 02:40:53.763771 dockerd[1913]: time="2025-09-13T02:40:53.763703028Z" level=info msg="Loading containers: done."
Sep 13 02:40:53.786108 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1204317381-merged.mount: Deactivated successfully.
Sep 13 02:40:53.787664 dockerd[1913]: time="2025-09-13T02:40:53.785530239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 02:40:53.788352 dockerd[1913]: time="2025-09-13T02:40:53.787850884Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 13 02:40:53.788743 dockerd[1913]: time="2025-09-13T02:40:53.788716408Z" level=info msg="Initializing buildkit"
Sep 13 02:40:53.817871 dockerd[1913]: time="2025-09-13T02:40:53.817817258Z" level=info msg="Completed buildkit initialization"
Sep 13 02:40:53.827804 dockerd[1913]: time="2025-09-13T02:40:53.827729376Z" level=info msg="Daemon has completed initialization"
Sep 13 02:40:53.828118 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 02:40:53.828739 dockerd[1913]: time="2025-09-13T02:40:53.828680262Z" level=info msg="API listen on /run/docker.sock"
Sep 13 02:40:55.081063 containerd[1592]: time="2025-09-13T02:40:55.080831514Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 02:40:55.831643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487214916.mount: Deactivated successfully.
Sep 13 02:40:57.048287 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 13 02:40:57.708769 containerd[1592]: time="2025-09-13T02:40:57.708683385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:40:57.710473 containerd[1592]: time="2025-09-13T02:40:57.710286466Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901"
Sep 13 02:40:57.711247 containerd[1592]: time="2025-09-13T02:40:57.711180023Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:40:57.714646 containerd[1592]: time="2025-09-13T02:40:57.714610480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:40:57.716958 containerd[1592]: time="2025-09-13T02:40:57.716342487Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.635403221s"
Sep 13 02:40:57.716958 containerd[1592]: time="2025-09-13T02:40:57.716432187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 02:40:57.717306 containerd[1592]: time="2025-09-13T02:40:57.717231656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 02:40:59.270242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 02:40:59.274785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:40:59.507864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:40:59.523480 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 02:40:59.853949 kubelet[2191]: E0913 02:40:59.852807 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 02:40:59.858162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 02:40:59.858711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 02:40:59.859688 systemd[1]: kubelet.service: Consumed 272ms CPU time, 108.6M memory peak.
Sep 13 02:41:00.023065 containerd[1592]: time="2025-09-13T02:41:00.022548733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:00.025051 containerd[1592]: time="2025-09-13T02:41:00.024674829Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852"
Sep 13 02:41:00.026538 containerd[1592]: time="2025-09-13T02:41:00.026503912Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:00.029828 containerd[1592]: time="2025-09-13T02:41:00.029789578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:00.031290 containerd[1592]: time="2025-09-13T02:41:00.031249729Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.313975862s"
Sep 13 02:41:00.031391 containerd[1592]: time="2025-09-13T02:41:00.031294395Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 02:41:00.031890 containerd[1592]: time="2025-09-13T02:41:00.031858884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 02:41:01.781605 containerd[1592]: time="2025-09-13T02:41:01.781521795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:01.783624 containerd[1592]: time="2025-09-13T02:41:01.783563749Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576"
Sep 13 02:41:01.784585 containerd[1592]: time="2025-09-13T02:41:01.783743157Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:01.787545 containerd[1592]: time="2025-09-13T02:41:01.787501367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:01.789483 containerd[1592]: time="2025-09-13T02:41:01.789430172Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.75633271s"
Sep 13 02:41:01.789719 containerd[1592]: time="2025-09-13T02:41:01.789654190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 02:41:01.791130 containerd[1592]: time="2025-09-13T02:41:01.791085431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 02:41:03.561468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802188519.mount: Deactivated successfully.
Sep 13 02:41:04.364243 containerd[1592]: time="2025-09-13T02:41:04.363200344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:04.364243 containerd[1592]: time="2025-09-13T02:41:04.364194096Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477"
Sep 13 02:41:04.365023 containerd[1592]: time="2025-09-13T02:41:04.364978009Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:04.367161 containerd[1592]: time="2025-09-13T02:41:04.367117681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:04.368240 containerd[1592]: time="2025-09-13T02:41:04.368193949Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.577057439s"
Sep 13 02:41:04.368385 containerd[1592]: time="2025-09-13T02:41:04.368355402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 02:41:04.369399 containerd[1592]: time="2025-09-13T02:41:04.369355403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 02:41:04.965886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211197944.mount: Deactivated successfully.
Sep 13 02:41:06.433980 containerd[1592]: time="2025-09-13T02:41:06.433908394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:06.435332 containerd[1592]: time="2025-09-13T02:41:06.435284862Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Sep 13 02:41:06.437050 containerd[1592]: time="2025-09-13T02:41:06.436139308Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:06.439622 containerd[1592]: time="2025-09-13T02:41:06.439583142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:06.441012 containerd[1592]: time="2025-09-13T02:41:06.440963592Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.071234816s"
Sep 13 02:41:06.441118 containerd[1592]: time="2025-09-13T02:41:06.441009082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 02:41:06.442559 containerd[1592]: time="2025-09-13T02:41:06.442529548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 02:41:07.010677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164459900.mount: Deactivated successfully.
Sep 13 02:41:07.017057 containerd[1592]: time="2025-09-13T02:41:07.016775646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 02:41:07.018814 containerd[1592]: time="2025-09-13T02:41:07.018782926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 13 02:41:07.019634 containerd[1592]: time="2025-09-13T02:41:07.019574645Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 02:41:07.023080 containerd[1592]: time="2025-09-13T02:41:07.022794097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 02:41:07.024540 containerd[1592]: time="2025-09-13T02:41:07.023987374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 581.326158ms"
Sep 13 02:41:07.024682 containerd[1592]: time="2025-09-13T02:41:07.024655465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 02:41:07.025922 containerd[1592]: time="2025-09-13T02:41:07.025812717Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 02:41:07.649362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316235352.mount: Deactivated successfully.
Sep 13 02:41:10.020671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 13 02:41:10.024517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:41:10.351231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:41:10.365888 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 02:41:10.487780 kubelet[2327]: E0913 02:41:10.487660 2327 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 02:41:10.491263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 02:41:10.491522 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 02:41:10.492327 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.6M memory peak.
Sep 13 02:41:10.511177 update_engine[1565]: I20250913 02:41:10.511044 1565 update_attempter.cc:509] Updating boot flags...
Sep 13 02:41:11.888821 containerd[1592]: time="2025-09-13T02:41:11.888751211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:11.890296 containerd[1592]: time="2025-09-13T02:41:11.890209899Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441"
Sep 13 02:41:11.891885 containerd[1592]: time="2025-09-13T02:41:11.891183511Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:11.894638 containerd[1592]: time="2025-09-13T02:41:11.894602108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:11.896432 containerd[1592]: time="2025-09-13T02:41:11.896392440Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.870329059s"
Sep 13 02:41:11.896583 containerd[1592]: time="2025-09-13T02:41:11.896554678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 02:41:18.547931 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:41:18.548777 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.6M memory peak.
Sep 13 02:41:18.552008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:41:18.593655 systemd[1]: Reload requested from client PID 2381 ('systemctl') (unit session-11.scope)...
Sep 13 02:41:18.593700 systemd[1]: Reloading... Sep 13 02:41:18.762076 zram_generator::config[2426]: No configuration found. Sep 13 02:41:18.923271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 02:41:19.106078 systemd[1]: Reloading finished in 511 ms. Sep 13 02:41:19.183049 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 02:41:19.183274 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 02:41:19.184130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 02:41:19.184232 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98.3M memory peak. Sep 13 02:41:19.186872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 02:41:19.373893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 02:41:19.385504 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 02:41:19.493411 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:41:19.493411 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 02:41:19.493411 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 02:41:19.494079 kubelet[2493]: I0913 02:41:19.493357 2493 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 02:41:20.382555 kubelet[2493]: I0913 02:41:20.382073 2493 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 02:41:20.382555 kubelet[2493]: I0913 02:41:20.382312 2493 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 02:41:20.382797 kubelet[2493]: I0913 02:41:20.382664 2493 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 02:41:20.421089 kubelet[2493]: E0913 02:41:20.420990 2493 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.23.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 02:41:20.423930 kubelet[2493]: I0913 02:41:20.423883 2493 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 02:41:20.455080 kubelet[2493]: I0913 02:41:20.454558 2493 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 02:41:20.463346 kubelet[2493]: I0913 02:41:20.463302 2493 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 02:41:20.470276 kubelet[2493]: I0913 02:41:20.470229 2493 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 02:41:20.473558 kubelet[2493]: I0913 02:41:20.470276 2493 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-m9tmw.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 02:41:20.473558 kubelet[2493]: I0913 02:41:20.473549 2493 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 
02:41:20.473558 kubelet[2493]: I0913 02:41:20.473571 2493 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 02:41:20.474020 kubelet[2493]: I0913 02:41:20.473788 2493 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:41:20.477589 kubelet[2493]: I0913 02:41:20.477244 2493 kubelet.go:480] "Attempting to sync node with API server" Sep 13 02:41:20.477589 kubelet[2493]: I0913 02:41:20.477303 2493 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 02:41:20.477589 kubelet[2493]: I0913 02:41:20.477357 2493 kubelet.go:386] "Adding apiserver pod source" Sep 13 02:41:20.479962 kubelet[2493]: I0913 02:41:20.479372 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 02:41:20.499049 kubelet[2493]: E0913 02:41:20.498617 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.23.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-m9tmw.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 02:41:20.503590 kubelet[2493]: I0913 02:41:20.502722 2493 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 13 02:41:20.503590 kubelet[2493]: I0913 02:41:20.503512 2493 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 02:41:20.505840 kubelet[2493]: W0913 02:41:20.505810 2493 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 02:41:20.515046 kubelet[2493]: E0913 02:41:20.514066 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.23.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 02:41:20.515046 kubelet[2493]: I0913 02:41:20.514427 2493 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 02:41:20.515046 kubelet[2493]: I0913 02:41:20.514510 2493 server.go:1289] "Started kubelet" Sep 13 02:41:20.520549 kubelet[2493]: I0913 02:41:20.520518 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 02:41:20.522060 kubelet[2493]: I0913 02:41:20.522004 2493 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 02:41:20.538464 kubelet[2493]: I0913 02:41:20.538437 2493 server.go:317] "Adding debug handlers to kubelet server" Sep 13 02:41:20.544063 kubelet[2493]: I0913 02:41:20.533612 2493 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 02:41:20.544063 kubelet[2493]: I0913 02:41:20.543094 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 02:41:20.544063 kubelet[2493]: I0913 02:41:20.543464 2493 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 02:41:20.544063 kubelet[2493]: E0913 02:41:20.533684 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" Sep 13 02:41:20.544063 kubelet[2493]: I0913 02:41:20.530407 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 02:41:20.544063 kubelet[2493]: I0913 02:41:20.533635 2493 desired_state_of_world_populator.go:150] "Desired 
state populator starts to run" Sep 13 02:41:20.544063 kubelet[2493]: I0913 02:41:20.543714 2493 reconciler.go:26] "Reconciler: start to sync state" Sep 13 02:41:20.549832 kubelet[2493]: E0913 02:41:20.544704 2493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.23.130:6443/api/v1/namespaces/default/events\": dial tcp 10.230.23.130:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-m9tmw.gb1.brightbox.com.1864b74969a81d96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-m9tmw.gb1.brightbox.com,UID:srv-m9tmw.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-m9tmw.gb1.brightbox.com,},FirstTimestamp:2025-09-13 02:41:20.514456982 +0000 UTC m=+1.123788824,LastTimestamp:2025-09-13 02:41:20.514456982 +0000 UTC m=+1.123788824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-m9tmw.gb1.brightbox.com,}" Sep 13 02:41:20.550638 kubelet[2493]: E0913 02:41:20.550599 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.23.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 02:41:20.550991 kubelet[2493]: I0913 02:41:20.550966 2493 factory.go:223] Registration of the systemd container factory successfully Sep 13 02:41:20.551259 kubelet[2493]: I0913 02:41:20.551232 2493 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 02:41:20.551992 kubelet[2493]: E0913 02:41:20.551957 2493 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.230.23.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-m9tmw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.130:6443: connect: connection refused" interval="200ms" Sep 13 02:41:20.555874 kubelet[2493]: E0913 02:41:20.555847 2493 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 02:41:20.556249 kubelet[2493]: I0913 02:41:20.556225 2493 factory.go:223] Registration of the containerd container factory successfully Sep 13 02:41:20.576079 kubelet[2493]: I0913 02:41:20.575903 2493 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 02:41:20.578266 kubelet[2493]: I0913 02:41:20.577769 2493 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 02:41:20.578266 kubelet[2493]: I0913 02:41:20.577812 2493 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 02:41:20.578266 kubelet[2493]: I0913 02:41:20.577845 2493 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 02:41:20.578266 kubelet[2493]: I0913 02:41:20.577863 2493 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 02:41:20.578266 kubelet[2493]: E0913 02:41:20.577925 2493 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 02:41:20.582825 kubelet[2493]: E0913 02:41:20.582793 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.23.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 02:41:20.587310 kubelet[2493]: I0913 02:41:20.587268 2493 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 02:41:20.587310 kubelet[2493]: I0913 02:41:20.587295 2493 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 02:41:20.587429 kubelet[2493]: I0913 02:41:20.587370 2493 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:41:20.589875 kubelet[2493]: I0913 02:41:20.589505 2493 policy_none.go:49] "None policy: Start" Sep 13 02:41:20.589875 kubelet[2493]: I0913 02:41:20.589548 2493 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 02:41:20.589875 kubelet[2493]: I0913 02:41:20.589575 2493 state_mem.go:35] "Initializing new in-memory state store" Sep 13 02:41:20.600879 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 02:41:20.615703 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 02:41:20.629539 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 13 02:41:20.632731 kubelet[2493]: E0913 02:41:20.632617 2493 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 02:41:20.634312 kubelet[2493]: I0913 02:41:20.632968 2493 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 02:41:20.634312 kubelet[2493]: I0913 02:41:20.633006 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 02:41:20.634908 kubelet[2493]: I0913 02:41:20.634883 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 02:41:20.636552 kubelet[2493]: E0913 02:41:20.636345 2493 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 02:41:20.636552 kubelet[2493]: E0913 02:41:20.636425 2493 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-m9tmw.gb1.brightbox.com\" not found" Sep 13 02:41:20.693282 systemd[1]: Created slice kubepods-burstable-pod722edc8d2e4f3474c2070ed206ce88c6.slice - libcontainer container kubepods-burstable-pod722edc8d2e4f3474c2070ed206ce88c6.slice. Sep 13 02:41:20.709199 kubelet[2493]: E0913 02:41:20.708967 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.717975 systemd[1]: Created slice kubepods-burstable-pod9c13c72d647fc5f33c5c96c4238a7b01.slice - libcontainer container kubepods-burstable-pod9c13c72d647fc5f33c5c96c4238a7b01.slice. 
Sep 13 02:41:20.721782 kubelet[2493]: E0913 02:41:20.721750 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.732528 systemd[1]: Created slice kubepods-burstable-pod5bd4f808bb32d82da17c702d57d4b685.slice - libcontainer container kubepods-burstable-pod5bd4f808bb32d82da17c702d57d4b685.slice. Sep 13 02:41:20.735952 kubelet[2493]: E0913 02:41:20.735522 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.737395 kubelet[2493]: I0913 02:41:20.737363 2493 kubelet_node_status.go:75] "Attempting to register node" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.737975 kubelet[2493]: E0913 02:41:20.737941 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.130:6443/api/v1/nodes\": dial tcp 10.230.23.130:6443: connect: connection refused" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.752720 kubelet[2493]: E0913 02:41:20.752664 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-m9tmw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.130:6443: connect: connection refused" interval="400ms" Sep 13 02:41:20.845004 kubelet[2493]: I0913 02:41:20.844534 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-flexvolume-dir\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.845882 kubelet[2493]: I0913 02:41:20.845823 2493 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-kubeconfig\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.846490 kubelet[2493]: I0913 02:41:20.846461 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/722edc8d2e4f3474c2070ed206ce88c6-ca-certs\") pod \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" (UID: \"722edc8d2e4f3474c2070ed206ce88c6\") " pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.848046 kubelet[2493]: I0913 02:41:20.847562 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/722edc8d2e4f3474c2070ed206ce88c6-k8s-certs\") pod \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" (UID: \"722edc8d2e4f3474c2070ed206ce88c6\") " pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.848046 kubelet[2493]: I0913 02:41:20.847617 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-ca-certs\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.848046 kubelet[2493]: I0913 02:41:20.847666 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-k8s-certs\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") 
" pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.848046 kubelet[2493]: I0913 02:41:20.847706 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.848046 kubelet[2493]: I0913 02:41:20.847741 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bd4f808bb32d82da17c702d57d4b685-kubeconfig\") pod \"kube-scheduler-srv-m9tmw.gb1.brightbox.com\" (UID: \"5bd4f808bb32d82da17c702d57d4b685\") " pod="kube-system/kube-scheduler-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.848292 kubelet[2493]: I0913 02:41:20.847788 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/722edc8d2e4f3474c2070ed206ce88c6-usr-share-ca-certificates\") pod \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" (UID: \"722edc8d2e4f3474c2070ed206ce88c6\") " pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.941908 kubelet[2493]: I0913 02:41:20.941779 2493 kubelet_node_status.go:75] "Attempting to register node" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:20.942851 kubelet[2493]: E0913 02:41:20.942807 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.130:6443/api/v1/nodes\": dial tcp 10.230.23.130:6443: connect: connection refused" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:21.011622 containerd[1592]: time="2025-09-13T02:41:21.011491140Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-m9tmw.gb1.brightbox.com,Uid:722edc8d2e4f3474c2070ed206ce88c6,Namespace:kube-system,Attempt:0,}" Sep 13 02:41:21.023785 containerd[1592]: time="2025-09-13T02:41:21.023728136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-m9tmw.gb1.brightbox.com,Uid:9c13c72d647fc5f33c5c96c4238a7b01,Namespace:kube-system,Attempt:0,}" Sep 13 02:41:21.038736 containerd[1592]: time="2025-09-13T02:41:21.038685700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-m9tmw.gb1.brightbox.com,Uid:5bd4f808bb32d82da17c702d57d4b685,Namespace:kube-system,Attempt:0,}" Sep 13 02:41:21.155218 kubelet[2493]: E0913 02:41:21.154446 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-m9tmw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.130:6443: connect: connection refused" interval="800ms" Sep 13 02:41:21.235656 containerd[1592]: time="2025-09-13T02:41:21.234655210Z" level=info msg="connecting to shim eeabab27d8d42dfc82b0bfc665eac0a58d2c39c353dc86f1c34774fa7a14d074" address="unix:///run/containerd/s/e369cee87aaa5931542c21cd560c2fd67ac768b6c3d6d5345aca81cfe3271007" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:41:21.237673 containerd[1592]: time="2025-09-13T02:41:21.237570509Z" level=info msg="connecting to shim f52371bf92f35c801a5172820295c6551c6856e168b6a426acd767955445b027" address="unix:///run/containerd/s/357e3a54f5664df0907f8c76984f406614e744c3b7f3b2be578f1e353540e9ba" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:41:21.250327 containerd[1592]: time="2025-09-13T02:41:21.250259163Z" level=info msg="connecting to shim d24627c69fd33267b4f4537a2e4836a8fd5fbaa1956b687e95121136011380a6" address="unix:///run/containerd/s/7561c27e63300a9b2415911dbab897a3fd56e814ff9e24bbb749b1fcadf1d501" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:41:21.348649 
kubelet[2493]: I0913 02:41:21.348095 2493 kubelet_node_status.go:75] "Attempting to register node" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:21.348649 kubelet[2493]: E0913 02:41:21.348607 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.130:6443/api/v1/nodes\": dial tcp 10.230.23.130:6443: connect: connection refused" node="srv-m9tmw.gb1.brightbox.com" Sep 13 02:41:21.395443 systemd[1]: Started cri-containerd-d24627c69fd33267b4f4537a2e4836a8fd5fbaa1956b687e95121136011380a6.scope - libcontainer container d24627c69fd33267b4f4537a2e4836a8fd5fbaa1956b687e95121136011380a6. Sep 13 02:41:21.398686 systemd[1]: Started cri-containerd-eeabab27d8d42dfc82b0bfc665eac0a58d2c39c353dc86f1c34774fa7a14d074.scope - libcontainer container eeabab27d8d42dfc82b0bfc665eac0a58d2c39c353dc86f1c34774fa7a14d074. Sep 13 02:41:21.402275 systemd[1]: Started cri-containerd-f52371bf92f35c801a5172820295c6551c6856e168b6a426acd767955445b027.scope - libcontainer container f52371bf92f35c801a5172820295c6551c6856e168b6a426acd767955445b027. 
Sep 13 02:41:21.484482 kubelet[2493]: E0913 02:41:21.484410 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.23.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 02:41:21.524572 containerd[1592]: time="2025-09-13T02:41:21.524476449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-m9tmw.gb1.brightbox.com,Uid:722edc8d2e4f3474c2070ed206ce88c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f52371bf92f35c801a5172820295c6551c6856e168b6a426acd767955445b027\"" Sep 13 02:41:21.527867 kubelet[2493]: E0913 02:41:21.527794 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.23.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 02:41:21.544486 containerd[1592]: time="2025-09-13T02:41:21.543552381Z" level=info msg="CreateContainer within sandbox \"f52371bf92f35c801a5172820295c6551c6856e168b6a426acd767955445b027\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 02:41:21.547744 containerd[1592]: time="2025-09-13T02:41:21.547485999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-m9tmw.gb1.brightbox.com,Uid:9c13c72d647fc5f33c5c96c4238a7b01,Namespace:kube-system,Attempt:0,} returns sandbox id \"d24627c69fd33267b4f4537a2e4836a8fd5fbaa1956b687e95121136011380a6\"" Sep 13 02:41:21.556339 containerd[1592]: time="2025-09-13T02:41:21.555578630Z" level=info msg="CreateContainer within sandbox \"d24627c69fd33267b4f4537a2e4836a8fd5fbaa1956b687e95121136011380a6\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 02:41:21.571274 containerd[1592]: time="2025-09-13T02:41:21.571007918Z" level=info msg="Container 4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:41:21.572375 containerd[1592]: time="2025-09-13T02:41:21.572318787Z" level=info msg="Container caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:41:21.593497 containerd[1592]: time="2025-09-13T02:41:21.593425301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-m9tmw.gb1.brightbox.com,Uid:5bd4f808bb32d82da17c702d57d4b685,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeabab27d8d42dfc82b0bfc665eac0a58d2c39c353dc86f1c34774fa7a14d074\"" Sep 13 02:41:21.598860 containerd[1592]: time="2025-09-13T02:41:21.598816650Z" level=info msg="CreateContainer within sandbox \"f52371bf92f35c801a5172820295c6551c6856e168b6a426acd767955445b027\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6\"" Sep 13 02:41:21.599693 containerd[1592]: time="2025-09-13T02:41:21.599137171Z" level=info msg="CreateContainer within sandbox \"d24627c69fd33267b4f4537a2e4836a8fd5fbaa1956b687e95121136011380a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796\"" Sep 13 02:41:21.599693 containerd[1592]: time="2025-09-13T02:41:21.599304939Z" level=info msg="CreateContainer within sandbox \"eeabab27d8d42dfc82b0bfc665eac0a58d2c39c353dc86f1c34774fa7a14d074\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 02:41:21.600216 containerd[1592]: time="2025-09-13T02:41:21.600184993Z" level=info msg="StartContainer for \"4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6\"" Sep 13 02:41:21.602397 
containerd[1592]: time="2025-09-13T02:41:21.600347948Z" level=info msg="StartContainer for \"caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796\"" Sep 13 02:41:21.605054 containerd[1592]: time="2025-09-13T02:41:21.604465268Z" level=info msg="connecting to shim 4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6" address="unix:///run/containerd/s/357e3a54f5664df0907f8c76984f406614e744c3b7f3b2be578f1e353540e9ba" protocol=ttrpc version=3 Sep 13 02:41:21.605054 containerd[1592]: time="2025-09-13T02:41:21.604787691Z" level=info msg="connecting to shim caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796" address="unix:///run/containerd/s/7561c27e63300a9b2415911dbab897a3fd56e814ff9e24bbb749b1fcadf1d501" protocol=ttrpc version=3 Sep 13 02:41:21.613285 containerd[1592]: time="2025-09-13T02:41:21.613208265Z" level=info msg="Container e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:41:21.621612 containerd[1592]: time="2025-09-13T02:41:21.621568344Z" level=info msg="CreateContainer within sandbox \"eeabab27d8d42dfc82b0bfc665eac0a58d2c39c353dc86f1c34774fa7a14d074\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146\"" Sep 13 02:41:21.622575 containerd[1592]: time="2025-09-13T02:41:21.622511153Z" level=info msg="StartContainer for \"e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146\"" Sep 13 02:41:21.626321 containerd[1592]: time="2025-09-13T02:41:21.626287928Z" level=info msg="connecting to shim e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146" address="unix:///run/containerd/s/e369cee87aaa5931542c21cd560c2fd67ac768b6c3d6d5345aca81cfe3271007" protocol=ttrpc version=3 Sep 13 02:41:21.643588 systemd[1]: Started cri-containerd-caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796.scope - libcontainer container 
caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796. Sep 13 02:41:21.655255 systemd[1]: Started cri-containerd-4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6.scope - libcontainer container 4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6. Sep 13 02:41:21.683398 systemd[1]: Started cri-containerd-e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146.scope - libcontainer container e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146. Sep 13 02:41:21.770121 kubelet[2493]: E0913 02:41:21.769726 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.23.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-m9tmw.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 02:41:21.781524 containerd[1592]: time="2025-09-13T02:41:21.780275812Z" level=info msg="StartContainer for \"caf30ce08d2cbd1e822d74b6806f20dcc46102599bb223da4fe24110ee065796\" returns successfully" Sep 13 02:41:21.788249 kubelet[2493]: E0913 02:41:21.788111 2493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.23.130:6443/api/v1/namespaces/default/events\": dial tcp 10.230.23.130:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-m9tmw.gb1.brightbox.com.1864b74969a81d96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-m9tmw.gb1.brightbox.com,UID:srv-m9tmw.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-m9tmw.gb1.brightbox.com,},FirstTimestamp:2025-09-13 02:41:20.514456982 +0000 UTC m=+1.123788824,LastTimestamp:2025-09-13 02:41:20.514456982 +0000 UTC m=+1.123788824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-m9tmw.gb1.brightbox.com,}"
Sep 13 02:41:21.805644 containerd[1592]: time="2025-09-13T02:41:21.805479316Z" level=info msg="StartContainer for \"4f0f3c5ce68189cff70c2fa31f12d2458950ddef84f7f02a8d52d527fc61f4e6\" returns successfully"
Sep 13 02:41:21.829228 kubelet[2493]: E0913 02:41:21.829138 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.23.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.23.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 02:41:21.861906 containerd[1592]: time="2025-09-13T02:41:21.861849761Z" level=info msg="StartContainer for \"e444eb19ef1b1c39fae1a4904d0f4adb0bbe52d244f7b90de2b4e328f3b77146\" returns successfully"
Sep 13 02:41:21.956055 kubelet[2493]: E0913 02:41:21.955960 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-m9tmw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.130:6443: connect: connection refused" interval="1.6s"
Sep 13 02:41:22.152303 kubelet[2493]: I0913 02:41:22.151872 2493 kubelet_node_status.go:75] "Attempting to register node" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:22.153708 kubelet[2493]: E0913 02:41:22.153121 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.130:6443/api/v1/nodes\": dial tcp 10.230.23.130:6443: connect: connection refused" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:22.603953 kubelet[2493]: E0913 02:41:22.603534 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:22.606406 kubelet[2493]: E0913 02:41:22.606381 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:22.611056 kubelet[2493]: E0913 02:41:22.609801 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:23.614498 kubelet[2493]: E0913 02:41:23.613892 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:23.614498 kubelet[2493]: E0913 02:41:23.613967 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:23.614498 kubelet[2493]: E0913 02:41:23.614332 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:23.757063 kubelet[2493]: I0913 02:41:23.756746 2493 kubelet_node_status.go:75] "Attempting to register node" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:24.618721 kubelet[2493]: E0913 02:41:24.618683 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:24.620095 kubelet[2493]: E0913 02:41:24.619353 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.216085 kubelet[2493]: E0913 02:41:25.215986 2493 nodelease.go:49] "Failed to get node when
trying to set owner ref to the node lease" err="nodes \"srv-m9tmw.gb1.brightbox.com\" not found" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.299179 kubelet[2493]: I0913 02:41:25.299119 2493 kubelet_node_status.go:78] "Successfully registered node" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.335058 kubelet[2493]: I0913 02:41:25.334873 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.346011 kubelet[2493]: E0913 02:41:25.345947 2493 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.346011 kubelet[2493]: I0913 02:41:25.345999 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.351412 kubelet[2493]: E0913 02:41:25.351339 2493 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.351412 kubelet[2493]: I0913 02:41:25.351405 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.353408 kubelet[2493]: E0913 02:41:25.353374 2493 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-m9tmw.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:25.505274 kubelet[2493]: I0913 02:41:25.505007 2493 apiserver.go:52] "Watching apiserver"
Sep 13 02:41:25.544764 kubelet[2493]: I0913 02:41:25.544689 2493 desired_state_of_world_populator.go:158] "Finished
populating initial desired state of world"
Sep 13 02:41:27.457098 systemd[1]: Reload requested from client PID 2777 ('systemctl') (unit session-11.scope)...
Sep 13 02:41:27.457176 systemd[1]: Reloading...
Sep 13 02:41:27.651293 zram_generator::config[2825]: No configuration found.
Sep 13 02:41:27.829374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 02:41:28.050438 systemd[1]: Reloading finished in 591 ms.
Sep 13 02:41:28.097773 kubelet[2493]: I0913 02:41:28.097582 2493 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 02:41:28.098837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:41:28.111987 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 02:41:28.112452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:41:28.112555 systemd[1]: kubelet.service: Consumed 1.625s CPU time, 130.8M memory peak.
Sep 13 02:41:28.119260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 02:41:28.425996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 02:41:28.441676 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 02:41:28.563356 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 02:41:28.563356 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 02:41:28.563356 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 02:41:28.565781 kubelet[2886]: I0913 02:41:28.564528 2886 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 02:41:28.579441 kubelet[2886]: I0913 02:41:28.578926 2886 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 02:41:28.579441 kubelet[2886]: I0913 02:41:28.578979 2886 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 02:41:28.579441 kubelet[2886]: I0913 02:41:28.579378 2886 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 02:41:28.582205 kubelet[2886]: I0913 02:41:28.581748 2886 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 13 02:41:28.593410 kubelet[2886]: I0913 02:41:28.593350 2886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 02:41:28.606138 kubelet[2886]: I0913 02:41:28.605785 2886 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 02:41:28.612977 kubelet[2886]: I0913 02:41:28.612913 2886 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Sep 13 02:41:28.614972 kubelet[2886]: I0913 02:41:28.614669 2886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 02:41:28.614972 kubelet[2886]: I0913 02:41:28.614710 2886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-m9tmw.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 02:41:28.614972 kubelet[2886]: I0913 02:41:28.614902 2886 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 02:41:28.614972 kubelet[2886]: I0913 02:41:28.614918 2886 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 02:41:28.616107 kubelet[2886]: I0913 02:41:28.614978 2886 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 02:41:28.617670 kubelet[2886]: I0913 02:41:28.617278 2886 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 02:41:28.617670 kubelet[2886]: I0913 02:41:28.617312 2886 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 02:41:28.617670 kubelet[2886]: I0913 02:41:28.617376 2886 kubelet.go:386] "Adding apiserver pod source"
Sep 13 02:41:28.617670 kubelet[2886]: I0913 02:41:28.617404 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 02:41:28.625277 kubelet[2886]: I0913 02:41:28.625016 2886 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 13 02:41:28.628440 kubelet[2886]: I0913 02:41:28.627966 2886 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 02:41:28.640083 kubelet[2886]: I0913 02:41:28.637338 2886 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 02:41:28.640083 kubelet[2886]: I0913 02:41:28.637402 2886 server.go:1289] "Started kubelet"
Sep 13 02:41:28.643737 kubelet[2886]: I0913 02:41:28.641773 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 02:41:28.653499 kubelet[2886]: I0913 02:41:28.653459 2886 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 02:41:28.656063 kubelet[2886]: I0913 02:41:28.654269 2886 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 02:41:28.656063 kubelet[2886]: E0913 02:41:28.655467 2886 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-m9tmw.gb1.brightbox.com\" not found"
Sep 13 02:41:28.657225 kubelet[2886]: I0913 02:41:28.656929 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 02:41:28.657586 kubelet[2886]: I0913 02:41:28.657556 2886 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 02:41:28.658792 kubelet[2886]: I0913 02:41:28.658766 2886 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 02:41:28.661061 kubelet[2886]: I0913 02:41:28.660247 2886 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 02:41:28.671427 kubelet[2886]: I0913 02:41:28.669356 2886 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 02:41:28.682296 kubelet[2886]: I0913 02:41:28.669773 2886 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 02:41:28.686154 kubelet[2886]: I0913 02:41:28.685251 2886 factory.go:223] Registration of the systemd container factory successfully
Sep 13 02:41:28.686154 kubelet[2886]: I0913 02:41:28.685396 2886 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 02:41:28.688421 kubelet[2886]: E0913 02:41:28.688361 2886 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 02:41:28.699106 kubelet[2886]: I0913 02:41:28.694986 2886 factory.go:223] Registration of the containerd container factory successfully
Sep 13 02:41:28.736628 kubelet[2886]: I0913 02:41:28.736499 2886 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 02:41:28.739165 kubelet[2886]: I0913 02:41:28.739139 2886 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6"
Sep 13 02:41:28.739283 kubelet[2886]: I0913 02:41:28.739265 2886 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 02:41:28.739424 kubelet[2886]: I0913 02:41:28.739403 2886 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 02:41:28.739605 kubelet[2886]: I0913 02:41:28.739588 2886 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 02:41:28.739764 kubelet[2886]: E0913 02:41:28.739735 2886 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 02:41:28.820932 kubelet[2886]: I0913 02:41:28.820392 2886 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 02:41:28.821199 kubelet[2886]: I0913 02:41:28.821174 2886 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 02:41:28.821310 kubelet[2886]: I0913 02:41:28.821293 2886 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 02:41:28.821955 kubelet[2886]: I0913 02:41:28.821907 2886 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 02:41:28.822113 kubelet[2886]: I0913 02:41:28.822067 2886 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 02:41:28.822217 kubelet[2886]: I0913 02:41:28.822199 2886 policy_none.go:49] "None policy: Start"
Sep 13 02:41:28.822340 kubelet[2886]: I0913 02:41:28.822321 2886 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 02:41:28.822453 kubelet[2886]: I0913 02:41:28.822434 2886 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 02:41:28.822697 kubelet[2886]: I0913 02:41:28.822675 2886 state_mem.go:75] "Updated machine memory state"
Sep 13 02:41:28.834774 kubelet[2886]: E0913 02:41:28.834742 2886 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 02:41:28.837353 kubelet[2886]: I0913 02:41:28.837330 2886 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 02:41:28.837647 kubelet[2886]: I0913 02:41:28.837600 2886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 02:41:28.838464 kubelet[2886]: I0913 02:41:28.838442 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 02:41:28.841688 kubelet[2886]: I0913 02:41:28.840851 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.842387 kubelet[2886]: E0913 02:41:28.842361 2886 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 02:41:28.846075 kubelet[2886]: I0913 02:41:28.844691 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.846649 kubelet[2886]: I0913 02:41:28.846553 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.873106 kubelet[2886]: I0913 02:41:28.872890 2886 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 02:41:28.875426 kubelet[2886]: I0913 02:41:28.874777 2886 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 02:41:28.875985 kubelet[2886]: I0913 02:41:28.875923 2886 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 02:41:28.963904 kubelet[2886]: I0913 02:41:28.963753 2886 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-ca-certs\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.964325 kubelet[2886]: I0913 02:41:28.964280 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-k8s-certs\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.964576 kubelet[2886]: I0913 02:41:28.964527 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-kubeconfig\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.964836 kubelet[2886]: I0913 02:41:28.964777 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/722edc8d2e4f3474c2070ed206ce88c6-k8s-certs\") pod \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" (UID: \"722edc8d2e4f3474c2070ed206ce88c6\") " pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.964987 kubelet[2886]: I0913 02:41:28.964961 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/722edc8d2e4f3474c2070ed206ce88c6-usr-share-ca-certificates\") pod \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" (UID:
\"722edc8d2e4f3474c2070ed206ce88c6\") " pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.965271 kubelet[2886]: I0913 02:41:28.965219 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-flexvolume-dir\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.965748 kubelet[2886]: I0913 02:41:28.965653 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c13c72d647fc5f33c5c96c4238a7b01-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-m9tmw.gb1.brightbox.com\" (UID: \"9c13c72d647fc5f33c5c96c4238a7b01\") " pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.965999 kubelet[2886]: I0913 02:41:28.965726 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bd4f808bb32d82da17c702d57d4b685-kubeconfig\") pod \"kube-scheduler-srv-m9tmw.gb1.brightbox.com\" (UID: \"5bd4f808bb32d82da17c702d57d4b685\") " pod="kube-system/kube-scheduler-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.966288 kubelet[2886]: I0913 02:41:28.965894 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/722edc8d2e4f3474c2070ed206ce88c6-ca-certs\") pod \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" (UID: \"722edc8d2e4f3474c2070ed206ce88c6\") " pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.975282 kubelet[2886]: I0913 02:41:28.975114 2886 kubelet_node_status.go:75] "Attempting to register node"
node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.987402 kubelet[2886]: I0913 02:41:28.987337 2886 kubelet_node_status.go:124] "Node was previously registered" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:28.988051 kubelet[2886]: I0913 02:41:28.987922 2886 kubelet_node_status.go:78] "Successfully registered node" node="srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:29.633191 kubelet[2886]: I0913 02:41:29.633081 2886 apiserver.go:52] "Watching apiserver"
Sep 13 02:41:29.658157 kubelet[2886]: I0913 02:41:29.658109 2886 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 02:41:29.788687 kubelet[2886]: I0913 02:41:29.788599 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:29.798396 kubelet[2886]: I0913 02:41:29.797625 2886 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 02:41:29.798396 kubelet[2886]: E0913 02:41:29.797692 2886 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-m9tmw.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com"
Sep 13 02:41:29.826540 kubelet[2886]: I0913 02:41:29.826440 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-m9tmw.gb1.brightbox.com" podStartSLOduration=1.826407312 podStartE2EDuration="1.826407312s" podCreationTimestamp="2025-09-13 02:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:41:29.826016163 +0000 UTC m=+1.347444506" watchObservedRunningTime="2025-09-13 02:41:29.826407312 +0000 UTC m=+1.347835655"
Sep 13 02:41:29.854586 kubelet[2886]: I0913 02:41:29.854523 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="kube-system/kube-apiserver-srv-m9tmw.gb1.brightbox.com" podStartSLOduration=1.854504351 podStartE2EDuration="1.854504351s" podCreationTimestamp="2025-09-13 02:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:41:29.837816169 +0000 UTC m=+1.359244517" watchObservedRunningTime="2025-09-13 02:41:29.854504351 +0000 UTC m=+1.375932686"
Sep 13 02:41:29.857053 kubelet[2886]: I0913 02:41:29.855198 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-m9tmw.gb1.brightbox.com" podStartSLOduration=1.855187939 podStartE2EDuration="1.855187939s" podCreationTimestamp="2025-09-13 02:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:41:29.853322746 +0000 UTC m=+1.374751092" watchObservedRunningTime="2025-09-13 02:41:29.855187939 +0000 UTC m=+1.376616299"
Sep 13 02:41:32.152811 kubelet[2886]: I0913 02:41:32.152712 2886 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 02:41:32.154561 kubelet[2886]: I0913 02:41:32.153570 2886 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 02:41:32.154674 containerd[1592]: time="2025-09-13T02:41:32.153301827Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 02:41:32.872556 systemd[1]: Created slice kubepods-besteffort-podbf4de5e1_219f_4607_a76c_711fab4bacd0.slice - libcontainer container kubepods-besteffort-podbf4de5e1_219f_4607_a76c_711fab4bacd0.slice.
Sep 13 02:41:32.888855 kubelet[2886]: I0913 02:41:32.888627 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m858d\" (UniqueName: \"kubernetes.io/projected/bf4de5e1-219f-4607-a76c-711fab4bacd0-kube-api-access-m858d\") pod \"kube-proxy-brp8h\" (UID: \"bf4de5e1-219f-4607-a76c-711fab4bacd0\") " pod="kube-system/kube-proxy-brp8h"
Sep 13 02:41:32.888855 kubelet[2886]: I0913 02:41:32.888689 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf4de5e1-219f-4607-a76c-711fab4bacd0-kube-proxy\") pod \"kube-proxy-brp8h\" (UID: \"bf4de5e1-219f-4607-a76c-711fab4bacd0\") " pod="kube-system/kube-proxy-brp8h"
Sep 13 02:41:32.888855 kubelet[2886]: I0913 02:41:32.888719 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf4de5e1-219f-4607-a76c-711fab4bacd0-xtables-lock\") pod \"kube-proxy-brp8h\" (UID: \"bf4de5e1-219f-4607-a76c-711fab4bacd0\") " pod="kube-system/kube-proxy-brp8h"
Sep 13 02:41:32.888855 kubelet[2886]: I0913 02:41:32.888759 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4de5e1-219f-4607-a76c-711fab4bacd0-lib-modules\") pod \"kube-proxy-brp8h\" (UID: \"bf4de5e1-219f-4607-a76c-711fab4bacd0\") " pod="kube-system/kube-proxy-brp8h"
Sep 13 02:41:32.999488 kubelet[2886]: E0913 02:41:32.999409 2886 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 02:41:32.999679 kubelet[2886]: E0913 02:41:32.999505 2886 projected.go:194] Error preparing data for projected volume kube-api-access-m858d for pod kube-system/kube-proxy-brp8h: configmap "kube-root-ca.crt" not found
Sep 13 02:41:32.999679 kubelet[2886]: E0913 02:41:32.999643 2886
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf4de5e1-219f-4607-a76c-711fab4bacd0-kube-api-access-m858d podName:bf4de5e1-219f-4607-a76c-711fab4bacd0 nodeName:}" failed. No retries permitted until 2025-09-13 02:41:33.49959191 +0000 UTC m=+5.021020246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m858d" (UniqueName: "kubernetes.io/projected/bf4de5e1-219f-4607-a76c-711fab4bacd0-kube-api-access-m858d") pod "kube-proxy-brp8h" (UID: "bf4de5e1-219f-4607-a76c-711fab4bacd0") : configmap "kube-root-ca.crt" not found
Sep 13 02:41:33.396988 systemd[1]: Created slice kubepods-besteffort-pod48ebe647_a655_4b81_9c28_4476993072c0.slice - libcontainer container kubepods-besteffort-pod48ebe647_a655_4b81_9c28_4476993072c0.slice.
Sep 13 02:41:33.492930 kubelet[2886]: I0913 02:41:33.492806 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48ebe647-a655-4b81-9c28-4476993072c0-var-lib-calico\") pod \"tigera-operator-755d956888-kj6zz\" (UID: \"48ebe647-a655-4b81-9c28-4476993072c0\") " pod="tigera-operator/tigera-operator-755d956888-kj6zz"
Sep 13 02:41:33.492930 kubelet[2886]: I0913 02:41:33.492872 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2hn\" (UniqueName: \"kubernetes.io/projected/48ebe647-a655-4b81-9c28-4476993072c0-kube-api-access-df2hn\") pod \"tigera-operator-755d956888-kj6zz\" (UID: \"48ebe647-a655-4b81-9c28-4476993072c0\") " pod="tigera-operator/tigera-operator-755d956888-kj6zz"
Sep 13 02:41:33.702777 containerd[1592]: time="2025-09-13T02:41:33.702206440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kj6zz,Uid:48ebe647-a655-4b81-9c28-4476993072c0,Namespace:tigera-operator,Attempt:0,}"
Sep 13 02:41:33.744597 containerd[1592]: time="2025-09-13T02:41:33.744524486Z" level=info
msg="connecting to shim 0b8ed9b6403a8a39dd093577ee65ba08bcba8266fe4913525500218669ec3c76" address="unix:///run/containerd/s/4a3e11776e74ecbd6be90361729541ca7d27674b8c72fd567013d6609c08c117" namespace=k8s.io protocol=ttrpc version=3
Sep 13 02:41:33.785071 containerd[1592]: time="2025-09-13T02:41:33.784997811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-brp8h,Uid:bf4de5e1-219f-4607-a76c-711fab4bacd0,Namespace:kube-system,Attempt:0,}"
Sep 13 02:41:33.792523 systemd[1]: Started cri-containerd-0b8ed9b6403a8a39dd093577ee65ba08bcba8266fe4913525500218669ec3c76.scope - libcontainer container 0b8ed9b6403a8a39dd093577ee65ba08bcba8266fe4913525500218669ec3c76.
Sep 13 02:41:33.828741 containerd[1592]: time="2025-09-13T02:41:33.828676271Z" level=info msg="connecting to shim 5e3141d1dec60faf3df0d52eefe7ebe85a71ac9ef922ed50f940499b287bf8cb" address="unix:///run/containerd/s/aeccec4eee54ab5856e488bc3ba59aa33b3332829e525e6f5feb2df4ce2eb7c2" namespace=k8s.io protocol=ttrpc version=3
Sep 13 02:41:33.888470 systemd[1]: Started cri-containerd-5e3141d1dec60faf3df0d52eefe7ebe85a71ac9ef922ed50f940499b287bf8cb.scope - libcontainer container 5e3141d1dec60faf3df0d52eefe7ebe85a71ac9ef922ed50f940499b287bf8cb.
Sep 13 02:41:33.924403 containerd[1592]: time="2025-09-13T02:41:33.924290670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kj6zz,Uid:48ebe647-a655-4b81-9c28-4476993072c0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b8ed9b6403a8a39dd093577ee65ba08bcba8266fe4913525500218669ec3c76\""
Sep 13 02:41:33.929233 containerd[1592]: time="2025-09-13T02:41:33.929188321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 02:41:33.955084 containerd[1592]: time="2025-09-13T02:41:33.953914409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-brp8h,Uid:bf4de5e1-219f-4607-a76c-711fab4bacd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e3141d1dec60faf3df0d52eefe7ebe85a71ac9ef922ed50f940499b287bf8cb\""
Sep 13 02:41:33.962577 containerd[1592]: time="2025-09-13T02:41:33.962539267Z" level=info msg="CreateContainer within sandbox \"5e3141d1dec60faf3df0d52eefe7ebe85a71ac9ef922ed50f940499b287bf8cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 02:41:33.982719 containerd[1592]: time="2025-09-13T02:41:33.982538554Z" level=info msg="Container 1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8: CDI devices from CRI Config.CDIDevices: []"
Sep 13 02:41:33.990039 containerd[1592]: time="2025-09-13T02:41:33.989976865Z" level=info msg="CreateContainer within sandbox \"5e3141d1dec60faf3df0d52eefe7ebe85a71ac9ef922ed50f940499b287bf8cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8\""
Sep 13 02:41:33.992089 containerd[1592]: time="2025-09-13T02:41:33.991005160Z" level=info msg="StartContainer for \"1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8\""
Sep 13 02:41:33.992922 containerd[1592]: time="2025-09-13T02:41:33.992849358Z" level=info msg="connecting to shim 1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8" address="unix:///run/containerd/s/aeccec4eee54ab5856e488bc3ba59aa33b3332829e525e6f5feb2df4ce2eb7c2" protocol=ttrpc version=3
Sep 13 02:41:34.019240 systemd[1]: Started cri-containerd-1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8.scope - libcontainer container 1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8.
Sep 13 02:41:34.103623 containerd[1592]: time="2025-09-13T02:41:34.103517104Z" level=info msg="StartContainer for \"1a61b7e1bf609409760b5663502fb69cfe05b4ccda1589c5d9e469ba4fc150b8\" returns successfully"
Sep 13 02:41:35.725546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229942401.mount: Deactivated successfully.
Sep 13 02:41:37.135397 containerd[1592]: time="2025-09-13T02:41:37.135322519Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:37.137047 containerd[1592]: time="2025-09-13T02:41:37.136839095Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 02:41:37.139453 containerd[1592]: time="2025-09-13T02:41:37.137790549Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:37.140762 containerd[1592]: time="2025-09-13T02:41:37.140724466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:41:37.142154 containerd[1592]: time="2025-09-13T02:41:37.141802822Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.212567848s"
Sep 13 02:41:37.142154 containerd[1592]: time="2025-09-13T02:41:37.141851819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 02:41:37.147965 containerd[1592]: time="2025-09-13T02:41:37.147894259Z" level=info msg="CreateContainer within sandbox \"0b8ed9b6403a8a39dd093577ee65ba08bcba8266fe4913525500218669ec3c76\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 02:41:37.159218 containerd[1592]: time="2025-09-13T02:41:37.158593582Z" level=info msg="Container 6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770: CDI devices from CRI Config.CDIDevices: []"
Sep 13 02:41:37.162301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922101591.mount: Deactivated successfully.
Sep 13 02:41:37.168240 containerd[1592]: time="2025-09-13T02:41:37.168204828Z" level=info msg="CreateContainer within sandbox \"0b8ed9b6403a8a39dd093577ee65ba08bcba8266fe4913525500218669ec3c76\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770\""
Sep 13 02:41:37.169460 containerd[1592]: time="2025-09-13T02:41:37.169419171Z" level=info msg="StartContainer for \"6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770\""
Sep 13 02:41:37.171507 containerd[1592]: time="2025-09-13T02:41:37.171456912Z" level=info msg="connecting to shim 6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770" address="unix:///run/containerd/s/4a3e11776e74ecbd6be90361729541ca7d27674b8c72fd567013d6609c08c117" protocol=ttrpc version=3
Sep 13 02:41:37.210246 systemd[1]: Started cri-containerd-6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770.scope - libcontainer container 6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770.
Sep 13 02:41:37.260022 containerd[1592]: time="2025-09-13T02:41:37.259932918Z" level=info msg="StartContainer for \"6eb67c38b3461c1aebf94b7bd35d5224d08539954ec9ee5d3c4e88b638d1d770\" returns successfully"
Sep 13 02:41:37.842055 kubelet[2886]: I0913 02:41:37.841496 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-brp8h" podStartSLOduration=5.841474663 podStartE2EDuration="5.841474663s" podCreationTimestamp="2025-09-13 02:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:41:34.826021364 +0000 UTC m=+6.347449715" watchObservedRunningTime="2025-09-13 02:41:37.841474663 +0000 UTC m=+9.362903009"
Sep 13 02:41:37.843256 kubelet[2886]: I0913 02:41:37.843055 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-kj6zz" podStartSLOduration=1.6276337760000001 podStartE2EDuration="4.842993805s" podCreationTimestamp="2025-09-13 02:41:33 +0000 UTC" firstStartedPulling="2025-09-13 02:41:33.928475862 +0000 UTC m=+5.449904192" lastFinishedPulling="2025-09-13 02:41:37.143835885 +0000 UTC m=+8.665264221" observedRunningTime="2025-09-13 02:41:37.84133949 +0000 UTC m=+9.362767837" watchObservedRunningTime="2025-09-13 02:41:37.842993805 +0000 UTC m=+9.364422153"
Sep 13 02:41:44.711002 sudo[1895]: pam_unix(sudo:session): session closed for user root
Sep 13 02:41:44.859062 sshd[1894]: Connection closed by 139.178.89.65 port 40314
Sep 13 02:41:44.859911 sshd-session[1892]: pam_unix(sshd:session): session closed for user core
Sep 13 02:41:44.871744 systemd[1]: sshd@8-10.230.23.130:22-139.178.89.65:40314.service: Deactivated successfully.
Sep 13 02:41:44.880340 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 02:41:44.880680 systemd[1]: session-11.scope: Consumed 9.121s CPU time, 153.7M memory peak.
Sep 13 02:41:44.888358 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit.
Sep 13 02:41:44.894971 systemd-logind[1564]: Removed session 11.
Sep 13 02:41:50.095424 systemd[1]: Created slice kubepods-besteffort-pod62ac0fbd_c4f4_44e5_9475_ce9425fce2ab.slice - libcontainer container kubepods-besteffort-pod62ac0fbd_c4f4_44e5_9475_ce9425fce2ab.slice.
Sep 13 02:41:50.200831 kubelet[2886]: I0913 02:41:50.200769 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvpcr\" (UniqueName: \"kubernetes.io/projected/62ac0fbd-c4f4-44e5-9475-ce9425fce2ab-kube-api-access-qvpcr\") pod \"calico-typha-f85cb8545-sbr6n\" (UID: \"62ac0fbd-c4f4-44e5-9475-ce9425fce2ab\") " pod="calico-system/calico-typha-f85cb8545-sbr6n"
Sep 13 02:41:50.200831 kubelet[2886]: I0913 02:41:50.200841 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/62ac0fbd-c4f4-44e5-9475-ce9425fce2ab-typha-certs\") pod \"calico-typha-f85cb8545-sbr6n\" (UID: \"62ac0fbd-c4f4-44e5-9475-ce9425fce2ab\") " pod="calico-system/calico-typha-f85cb8545-sbr6n"
Sep 13 02:41:50.201880 kubelet[2886]: I0913 02:41:50.200875 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ac0fbd-c4f4-44e5-9475-ce9425fce2ab-tigera-ca-bundle\") pod \"calico-typha-f85cb8545-sbr6n\" (UID: \"62ac0fbd-c4f4-44e5-9475-ce9425fce2ab\") " pod="calico-system/calico-typha-f85cb8545-sbr6n"
Sep 13 02:41:50.284930 systemd[1]: Created slice kubepods-besteffort-pod31bb004d_c1ad_4a81_84e6_426182a0687f.slice - libcontainer container kubepods-besteffort-pod31bb004d_c1ad_4a81_84e6_426182a0687f.slice.
Sep 13 02:41:50.402085 kubelet[2886]: I0913 02:41:50.401546 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-policysync\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402085 kubelet[2886]: I0913 02:41:50.401656 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-cni-bin-dir\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402085 kubelet[2886]: I0913 02:41:50.401689 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4xk\" (UniqueName: \"kubernetes.io/projected/31bb004d-c1ad-4a81-84e6-426182a0687f-kube-api-access-qg4xk\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402085 kubelet[2886]: I0913 02:41:50.401754 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bb004d-c1ad-4a81-84e6-426182a0687f-tigera-ca-bundle\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402085 kubelet[2886]: I0913 02:41:50.401800 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-cni-net-dir\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402620 kubelet[2886]: I0913 02:41:50.401827 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-var-lib-calico\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402620 kubelet[2886]: I0913 02:41:50.401875 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-flexvol-driver-host\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.402996 kubelet[2886]: I0913 02:41:50.402849 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-var-run-calico\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.408714 kubelet[2886]: I0913 02:41:50.408324 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-cni-log-dir\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.408714 kubelet[2886]: I0913 02:41:50.408427 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-lib-modules\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.408714 kubelet[2886]: I0913 02:41:50.408464 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31bb004d-c1ad-4a81-84e6-426182a0687f-xtables-lock\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.408714 kubelet[2886]: I0913 02:41:50.408513 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31bb004d-c1ad-4a81-84e6-426182a0687f-node-certs\") pod \"calico-node-zkx8h\" (UID: \"31bb004d-c1ad-4a81-84e6-426182a0687f\") " pod="calico-system/calico-node-zkx8h"
Sep 13 02:41:50.408976 containerd[1592]: time="2025-09-13T02:41:50.408551655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f85cb8545-sbr6n,Uid:62ac0fbd-c4f4-44e5-9475-ce9425fce2ab,Namespace:calico-system,Attempt:0,}"
Sep 13 02:41:50.462823 containerd[1592]: time="2025-09-13T02:41:50.462343899Z" level=info msg="connecting to shim 883e16876a6f444d263be2cca6ad5c0f272768d43ddf0b3b7329e18655ee3ede" address="unix:///run/containerd/s/381ae2b81b51c696fb60e2ef9c57dca66a9bba80cfe51ec036474540a3de6b1c" namespace=k8s.io protocol=ttrpc version=3
Sep 13 02:41:50.530060 kubelet[2886]: E0913 02:41:50.527875 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.530060 kubelet[2886]: W0913 02:41:50.527915 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.530060 kubelet[2886]: E0913 02:41:50.527970 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.556724 systemd[1]: Started cri-containerd-883e16876a6f444d263be2cca6ad5c0f272768d43ddf0b3b7329e18655ee3ede.scope - libcontainer container 883e16876a6f444d263be2cca6ad5c0f272768d43ddf0b3b7329e18655ee3ede.
Sep 13 02:41:50.565496 kubelet[2886]: E0913 02:41:50.565332 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.565496 kubelet[2886]: W0913 02:41:50.565492 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.565996 kubelet[2886]: E0913 02:41:50.565671 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.599577 containerd[1592]: time="2025-09-13T02:41:50.599521014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkx8h,Uid:31bb004d-c1ad-4a81-84e6-426182a0687f,Namespace:calico-system,Attempt:0,}"
Sep 13 02:41:50.654804 containerd[1592]: time="2025-09-13T02:41:50.653310097Z" level=info msg="connecting to shim 86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d" address="unix:///run/containerd/s/3b233c501c439c6b11096629d3b6065119fb6c451dd0498fe6df88b6b913d386" namespace=k8s.io protocol=ttrpc version=3
Sep 13 02:41:50.710307 systemd[1]: Started cri-containerd-86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d.scope - libcontainer container 86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d.
Sep 13 02:41:50.725185 kubelet[2886]: E0913 02:41:50.724589 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89"
Sep 13 02:41:50.816573 kubelet[2886]: E0913 02:41:50.816434 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.816774 kubelet[2886]: W0913 02:41:50.816476 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.816774 kubelet[2886]: E0913 02:41:50.816718 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.818305 kubelet[2886]: E0913 02:41:50.818145 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.818305 kubelet[2886]: W0913 02:41:50.818172 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.818305 kubelet[2886]: E0913 02:41:50.818190 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.819975 kubelet[2886]: E0913 02:41:50.819750 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.819975 kubelet[2886]: W0913 02:41:50.819771 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.819975 kubelet[2886]: E0913 02:41:50.819789 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.820580 kubelet[2886]: E0913 02:41:50.820492 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.820737 kubelet[2886]: W0913 02:41:50.820511 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.820737 kubelet[2886]: E0913 02:41:50.820725 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.821737 kubelet[2886]: E0913 02:41:50.821566 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.821737 kubelet[2886]: W0913 02:41:50.821586 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.821737 kubelet[2886]: E0913 02:41:50.821602 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.822563 kubelet[2886]: E0913 02:41:50.822535 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.822695 kubelet[2886]: W0913 02:41:50.822559 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.822764 kubelet[2886]: E0913 02:41:50.822700 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.824102 kubelet[2886]: E0913 02:41:50.824076 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.824102 kubelet[2886]: W0913 02:41:50.824101 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.824429 kubelet[2886]: E0913 02:41:50.824118 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.824429 kubelet[2886]: E0913 02:41:50.824392 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.824429 kubelet[2886]: W0913 02:41:50.824407 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.824429 kubelet[2886]: E0913 02:41:50.824422 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.825153 kubelet[2886]: E0913 02:41:50.825130 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.825153 kubelet[2886]: W0913 02:41:50.825150 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.825303 kubelet[2886]: E0913 02:41:50.825166 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.825950 kubelet[2886]: E0913 02:41:50.825864 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.825950 kubelet[2886]: W0913 02:41:50.825884 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.825950 kubelet[2886]: E0913 02:41:50.825932 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.827289 kubelet[2886]: E0913 02:41:50.827256 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.827289 kubelet[2886]: W0913 02:41:50.827276 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.827430 kubelet[2886]: E0913 02:41:50.827293 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.829125 kubelet[2886]: E0913 02:41:50.828847 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.829198 kubelet[2886]: W0913 02:41:50.829132 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.829198 kubelet[2886]: E0913 02:41:50.829154 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.830285 kubelet[2886]: E0913 02:41:50.830262 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.830601 kubelet[2886]: W0913 02:41:50.830572 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.830688 kubelet[2886]: E0913 02:41:50.830600 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.831690 kubelet[2886]: E0913 02:41:50.831652 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.831690 kubelet[2886]: W0913 02:41:50.831674 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.831690 kubelet[2886]: E0913 02:41:50.831692 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.832435 kubelet[2886]: E0913 02:41:50.832412 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.832435 kubelet[2886]: W0913 02:41:50.832432 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.832632 kubelet[2886]: E0913 02:41:50.832448 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.834450 kubelet[2886]: E0913 02:41:50.834426 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.834450 kubelet[2886]: W0913 02:41:50.834446 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.834594 kubelet[2886]: E0913 02:41:50.834467 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.835139 kubelet[2886]: E0913 02:41:50.835049 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.835216 kubelet[2886]: W0913 02:41:50.835081 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.835280 kubelet[2886]: E0913 02:41:50.835227 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.836295 kubelet[2886]: E0913 02:41:50.836022 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.836295 kubelet[2886]: W0913 02:41:50.836286 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.836453 kubelet[2886]: E0913 02:41:50.836306 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.837260 kubelet[2886]: E0913 02:41:50.837197 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.837696 kubelet[2886]: W0913 02:41:50.837222 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.837981 kubelet[2886]: E0913 02:41:50.837701 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.838679 kubelet[2886]: E0913 02:41:50.838433 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.839076 kubelet[2886]: W0913 02:41:50.839046 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.839191 kubelet[2886]: E0913 02:41:50.839104 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.839871 containerd[1592]: time="2025-09-13T02:41:50.839811940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkx8h,Uid:31bb004d-c1ad-4a81-84e6-426182a0687f,Namespace:calico-system,Attempt:0,} returns sandbox id \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\""
Sep 13 02:41:50.849465 containerd[1592]: time="2025-09-13T02:41:50.849401026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 02:41:50.916204 kubelet[2886]: E0913 02:41:50.915534 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.916204 kubelet[2886]: W0913 02:41:50.915569 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.916204 kubelet[2886]: E0913 02:41:50.915644 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.916204 kubelet[2886]: I0913 02:41:50.915698 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lbk\" (UniqueName: \"kubernetes.io/projected/fa4dd181-bd58-4b28-95b9-780c8fc5de89-kube-api-access-j7lbk\") pod \"csi-node-driver-gw4fm\" (UID: \"fa4dd181-bd58-4b28-95b9-780c8fc5de89\") " pod="calico-system/csi-node-driver-gw4fm"
Sep 13 02:41:50.918399 kubelet[2886]: E0913 02:41:50.917170 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.918399 kubelet[2886]: W0913 02:41:50.917188 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.918399 kubelet[2886]: E0913 02:41:50.917221 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 02:41:50.918399 kubelet[2886]: I0913 02:41:50.917251 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fa4dd181-bd58-4b28-95b9-780c8fc5de89-varrun\") pod \"csi-node-driver-gw4fm\" (UID: \"fa4dd181-bd58-4b28-95b9-780c8fc5de89\") " pod="calico-system/csi-node-driver-gw4fm"
Sep 13 02:41:50.918399 kubelet[2886]: E0913 02:41:50.917493 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 02:41:50.918399 kubelet[2886]: W0913 02:41:50.917508 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 02:41:50.918399 kubelet[2886]: E0913 02:41:50.917523 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 02:41:50.918399 kubelet[2886]: I0913 02:41:50.917576 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4dd181-bd58-4b28-95b9-780c8fc5de89-kubelet-dir\") pod \"csi-node-driver-gw4fm\" (UID: \"fa4dd181-bd58-4b28-95b9-780c8fc5de89\") " pod="calico-system/csi-node-driver-gw4fm" Sep 13 02:41:50.918399 kubelet[2886]: E0913 02:41:50.917824 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.918852 kubelet[2886]: W0913 02:41:50.917839 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.918852 kubelet[2886]: E0913 02:41:50.917857 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:50.918852 kubelet[2886]: I0913 02:41:50.917881 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa4dd181-bd58-4b28-95b9-780c8fc5de89-registration-dir\") pod \"csi-node-driver-gw4fm\" (UID: \"fa4dd181-bd58-4b28-95b9-780c8fc5de89\") " pod="calico-system/csi-node-driver-gw4fm" Sep 13 02:41:50.922859 kubelet[2886]: E0913 02:41:50.920313 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.922859 kubelet[2886]: W0913 02:41:50.920336 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.922859 kubelet[2886]: E0913 02:41:50.920356 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:50.923638 kubelet[2886]: E0913 02:41:50.923403 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.923638 kubelet[2886]: W0913 02:41:50.923425 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.923638 kubelet[2886]: E0913 02:41:50.923443 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:50.924799 kubelet[2886]: E0913 02:41:50.924600 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.924799 kubelet[2886]: W0913 02:41:50.924632 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.924799 kubelet[2886]: E0913 02:41:50.924652 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:50.925590 kubelet[2886]: E0913 02:41:50.925310 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.925590 kubelet[2886]: W0913 02:41:50.925330 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.925590 kubelet[2886]: E0913 02:41:50.925346 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:50.925795 kubelet[2886]: I0913 02:41:50.925732 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa4dd181-bd58-4b28-95b9-780c8fc5de89-socket-dir\") pod \"csi-node-driver-gw4fm\" (UID: \"fa4dd181-bd58-4b28-95b9-780c8fc5de89\") " pod="calico-system/csi-node-driver-gw4fm" Sep 13 02:41:50.926986 kubelet[2886]: E0913 02:41:50.926963 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.927280 kubelet[2886]: W0913 02:41:50.927104 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.927280 kubelet[2886]: E0913 02:41:50.927130 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:50.927509 kubelet[2886]: E0913 02:41:50.927488 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.927608 kubelet[2886]: W0913 02:41:50.927587 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.927892 kubelet[2886]: E0913 02:41:50.927719 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:50.928077 kubelet[2886]: E0913 02:41:50.928057 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.928171 kubelet[2886]: W0913 02:41:50.928151 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.928289 kubelet[2886]: E0913 02:41:50.928269 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:50.928670 kubelet[2886]: E0913 02:41:50.928578 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.928670 kubelet[2886]: W0913 02:41:50.928598 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.928670 kubelet[2886]: E0913 02:41:50.928613 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:50.929381 kubelet[2886]: E0913 02:41:50.929356 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.929381 kubelet[2886]: W0913 02:41:50.929378 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.929514 kubelet[2886]: E0913 02:41:50.929398 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:50.931807 kubelet[2886]: E0913 02:41:50.931772 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.931927 kubelet[2886]: W0913 02:41:50.931899 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.931927 kubelet[2886]: E0913 02:41:50.931923 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:50.932705 kubelet[2886]: E0913 02:41:50.932682 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:50.932705 kubelet[2886]: W0913 02:41:50.932702 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:50.932835 kubelet[2886]: E0913 02:41:50.932719 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:50.934540 containerd[1592]: time="2025-09-13T02:41:50.934485682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f85cb8545-sbr6n,Uid:62ac0fbd-c4f4-44e5-9475-ce9425fce2ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"883e16876a6f444d263be2cca6ad5c0f272768d43ddf0b3b7329e18655ee3ede\"" Sep 13 02:41:51.027563 kubelet[2886]: E0913 02:41:51.027508 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.027563 kubelet[2886]: W0913 02:41:51.027539 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.027563 kubelet[2886]: E0913 02:41:51.027567 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.028489 kubelet[2886]: E0913 02:41:51.027963 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.028489 kubelet[2886]: W0913 02:41:51.027978 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.028489 kubelet[2886]: E0913 02:41:51.027993 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.028489 kubelet[2886]: E0913 02:41:51.028358 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.028489 kubelet[2886]: W0913 02:41:51.028374 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.028489 kubelet[2886]: E0913 02:41:51.028389 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.029784 kubelet[2886]: E0913 02:41:51.028796 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.029784 kubelet[2886]: W0913 02:41:51.028811 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.029784 kubelet[2886]: E0913 02:41:51.028827 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.029784 kubelet[2886]: E0913 02:41:51.029283 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.029784 kubelet[2886]: W0913 02:41:51.029298 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.029784 kubelet[2886]: E0913 02:41:51.029314 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.030856 kubelet[2886]: E0913 02:41:51.030001 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.030856 kubelet[2886]: W0913 02:41:51.030020 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.030856 kubelet[2886]: E0913 02:41:51.030060 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.030856 kubelet[2886]: E0913 02:41:51.030355 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.030856 kubelet[2886]: W0913 02:41:51.030369 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.030856 kubelet[2886]: E0913 02:41:51.030384 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.032046 kubelet[2886]: E0913 02:41:51.031289 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.032046 kubelet[2886]: W0913 02:41:51.031304 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.032046 kubelet[2886]: E0913 02:41:51.031319 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.032046 kubelet[2886]: E0913 02:41:51.031573 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.032046 kubelet[2886]: W0913 02:41:51.031587 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.032046 kubelet[2886]: E0913 02:41:51.031601 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.032046 kubelet[2886]: E0913 02:41:51.031882 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.032046 kubelet[2886]: W0913 02:41:51.031896 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.032046 kubelet[2886]: E0913 02:41:51.031920 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.033339 kubelet[2886]: E0913 02:41:51.032262 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.033339 kubelet[2886]: W0913 02:41:51.032276 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.033339 kubelet[2886]: E0913 02:41:51.032291 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.033339 kubelet[2886]: E0913 02:41:51.032937 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.033339 kubelet[2886]: W0913 02:41:51.032993 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.033339 kubelet[2886]: E0913 02:41:51.033012 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.034429 kubelet[2886]: E0913 02:41:51.033857 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.034429 kubelet[2886]: W0913 02:41:51.033879 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.034429 kubelet[2886]: E0913 02:41:51.033895 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.034930 kubelet[2886]: E0913 02:41:51.034652 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.034930 kubelet[2886]: W0913 02:41:51.034674 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.034930 kubelet[2886]: E0913 02:41:51.034694 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.035495 kubelet[2886]: E0913 02:41:51.035475 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.035744 kubelet[2886]: W0913 02:41:51.035676 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.035744 kubelet[2886]: E0913 02:41:51.035700 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.036414 kubelet[2886]: E0913 02:41:51.036387 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.036894 kubelet[2886]: W0913 02:41:51.036513 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.036894 kubelet[2886]: E0913 02:41:51.036572 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.038196 kubelet[2886]: E0913 02:41:51.038166 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.038196 kubelet[2886]: W0913 02:41:51.038188 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.038327 kubelet[2886]: E0913 02:41:51.038205 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.038529 kubelet[2886]: E0913 02:41:51.038493 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.038529 kubelet[2886]: W0913 02:41:51.038521 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.038671 kubelet[2886]: E0913 02:41:51.038539 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.038869 kubelet[2886]: E0913 02:41:51.038841 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.038869 kubelet[2886]: W0913 02:41:51.038862 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.038969 kubelet[2886]: E0913 02:41:51.038878 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.039232 kubelet[2886]: E0913 02:41:51.039203 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.039508 kubelet[2886]: W0913 02:41:51.039456 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.039508 kubelet[2886]: E0913 02:41:51.039484 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.039882 kubelet[2886]: E0913 02:41:51.039858 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.039882 kubelet[2886]: W0913 02:41:51.039878 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.040282 kubelet[2886]: E0913 02:41:51.039894 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.040282 kubelet[2886]: E0913 02:41:51.040246 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.040282 kubelet[2886]: W0913 02:41:51.040259 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.040282 kubelet[2886]: E0913 02:41:51.040274 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.040846 kubelet[2886]: E0913 02:41:51.040563 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.040846 kubelet[2886]: W0913 02:41:51.040583 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.040846 kubelet[2886]: E0913 02:41:51.040599 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.041837 kubelet[2886]: E0913 02:41:51.041813 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.041837 kubelet[2886]: W0913 02:41:51.041834 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.042109 kubelet[2886]: E0913 02:41:51.041851 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:51.042383 kubelet[2886]: E0913 02:41:51.042349 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.042383 kubelet[2886]: W0913 02:41:51.042370 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.042617 kubelet[2886]: E0913 02:41:51.042386 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 02:41:51.059668 kubelet[2886]: E0913 02:41:51.059605 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 02:41:51.059668 kubelet[2886]: W0913 02:41:51.059658 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 02:41:51.060084 kubelet[2886]: E0913 02:41:51.059686 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 02:41:52.445450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466652895.mount: Deactivated successfully. Sep 13 02:41:52.714151 containerd[1592]: time="2025-09-13T02:41:52.712225442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:41:52.714151 containerd[1592]: time="2025-09-13T02:41:52.713470490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 13 02:41:52.715638 containerd[1592]: time="2025-09-13T02:41:52.715237008Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:41:52.718869 containerd[1592]: time="2025-09-13T02:41:52.718827613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:41:52.721916 containerd[1592]: time="2025-09-13T02:41:52.721867535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with 
image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.871645407s" Sep 13 02:41:52.722000 containerd[1592]: time="2025-09-13T02:41:52.721921796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 02:41:52.725166 containerd[1592]: time="2025-09-13T02:41:52.724705960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 02:41:52.728753 containerd[1592]: time="2025-09-13T02:41:52.728720203Z" level=info msg="CreateContainer within sandbox \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 02:41:52.743761 kubelet[2886]: E0913 02:41:52.743107 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89" Sep 13 02:41:52.751582 containerd[1592]: time="2025-09-13T02:41:52.751441549Z" level=info msg="Container 22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:41:52.760286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547671032.mount: Deactivated successfully. 
Sep 13 02:41:52.783815 containerd[1592]: time="2025-09-13T02:41:52.783761321Z" level=info msg="CreateContainer within sandbox \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\"" Sep 13 02:41:52.785965 containerd[1592]: time="2025-09-13T02:41:52.785243375Z" level=info msg="StartContainer for \"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\"" Sep 13 02:41:52.805800 containerd[1592]: time="2025-09-13T02:41:52.805441445Z" level=info msg="connecting to shim 22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220" address="unix:///run/containerd/s/3b233c501c439c6b11096629d3b6065119fb6c451dd0498fe6df88b6b913d386" protocol=ttrpc version=3 Sep 13 02:41:52.861268 systemd[1]: Started cri-containerd-22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220.scope - libcontainer container 22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220. Sep 13 02:41:53.069448 containerd[1592]: time="2025-09-13T02:41:53.069392573Z" level=info msg="StartContainer for \"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\" returns successfully" Sep 13 02:41:53.100668 systemd[1]: cri-containerd-22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220.scope: Deactivated successfully. 
Sep 13 02:41:53.151973 containerd[1592]: time="2025-09-13T02:41:53.147808215Z" level=info msg="received exit event container_id:\"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\" id:\"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\" pid:3486 exited_at:{seconds:1757731313 nanos:107669871}" Sep 13 02:41:53.175233 containerd[1592]: time="2025-09-13T02:41:53.175143186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\" id:\"22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220\" pid:3486 exited_at:{seconds:1757731313 nanos:107669871}" Sep 13 02:41:53.370130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22bb3643fae508613db69dc07d111da46f4ab5a400a31611e14031dce8dbc220-rootfs.mount: Deactivated successfully. Sep 13 02:41:54.741514 kubelet[2886]: E0913 02:41:54.740920 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89" Sep 13 02:41:55.858640 containerd[1592]: time="2025-09-13T02:41:55.858571889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:41:55.860085 containerd[1592]: time="2025-09-13T02:41:55.860054473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 13 02:41:55.861215 containerd[1592]: time="2025-09-13T02:41:55.861103507Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:41:55.864876 containerd[1592]: time="2025-09-13T02:41:55.863726988Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:41:55.864876 containerd[1592]: time="2025-09-13T02:41:55.864746331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.139997933s" Sep 13 02:41:55.864876 containerd[1592]: time="2025-09-13T02:41:55.864780534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 02:41:55.866697 containerd[1592]: time="2025-09-13T02:41:55.866657960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 02:41:55.889319 containerd[1592]: time="2025-09-13T02:41:55.889261699Z" level=info msg="CreateContainer within sandbox \"883e16876a6f444d263be2cca6ad5c0f272768d43ddf0b3b7329e18655ee3ede\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 02:41:55.905830 containerd[1592]: time="2025-09-13T02:41:55.905745520Z" level=info msg="Container 84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:41:55.918422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391057886.mount: Deactivated successfully. 
Sep 13 02:41:55.925580 containerd[1592]: time="2025-09-13T02:41:55.925473673Z" level=info msg="CreateContainer within sandbox \"883e16876a6f444d263be2cca6ad5c0f272768d43ddf0b3b7329e18655ee3ede\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358\"" Sep 13 02:41:55.934146 containerd[1592]: time="2025-09-13T02:41:55.932696944Z" level=info msg="StartContainer for \"84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358\"" Sep 13 02:41:55.935275 containerd[1592]: time="2025-09-13T02:41:55.935231066Z" level=info msg="connecting to shim 84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358" address="unix:///run/containerd/s/381ae2b81b51c696fb60e2ef9c57dca66a9bba80cfe51ec036474540a3de6b1c" protocol=ttrpc version=3 Sep 13 02:41:55.978244 systemd[1]: Started cri-containerd-84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358.scope - libcontainer container 84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358. 
Sep 13 02:41:56.068174 containerd[1592]: time="2025-09-13T02:41:56.068118779Z" level=info msg="StartContainer for \"84f3358aca43268215812d45b1747e29b6b75710fd12b3506e1ac4dd376c2358\" returns successfully" Sep 13 02:41:56.742061 kubelet[2886]: E0913 02:41:56.741574 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89" Sep 13 02:41:56.951062 kubelet[2886]: I0913 02:41:56.950782 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f85cb8545-sbr6n" podStartSLOduration=2.021055755 podStartE2EDuration="6.950755979s" podCreationTimestamp="2025-09-13 02:41:50 +0000 UTC" firstStartedPulling="2025-09-13 02:41:50.936451595 +0000 UTC m=+22.457879924" lastFinishedPulling="2025-09-13 02:41:55.866151807 +0000 UTC m=+27.387580148" observedRunningTime="2025-09-13 02:41:56.949743042 +0000 UTC m=+28.471171401" watchObservedRunningTime="2025-09-13 02:41:56.950755979 +0000 UTC m=+28.472184327" Sep 13 02:41:57.939563 kubelet[2886]: I0913 02:41:57.939466 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 02:41:58.742066 kubelet[2886]: E0913 02:41:58.741828 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89" Sep 13 02:42:00.740929 kubelet[2886]: E0913 02:42:00.740838 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89" Sep 13 02:42:01.247798 containerd[1592]: time="2025-09-13T02:42:01.247714616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:01.249305 containerd[1592]: time="2025-09-13T02:42:01.249256066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 02:42:01.252140 containerd[1592]: time="2025-09-13T02:42:01.252086602Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:01.254691 containerd[1592]: time="2025-09-13T02:42:01.254631177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:01.256108 containerd[1592]: time="2025-09-13T02:42:01.256070536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.389369884s" Sep 13 02:42:01.256227 containerd[1592]: time="2025-09-13T02:42:01.256113710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 02:42:01.261533 containerd[1592]: time="2025-09-13T02:42:01.261495459Z" level=info msg="CreateContainer within sandbox \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 
13 02:42:01.273065 containerd[1592]: time="2025-09-13T02:42:01.272644865Z" level=info msg="Container 792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:01.280071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386941902.mount: Deactivated successfully. Sep 13 02:42:01.293173 containerd[1592]: time="2025-09-13T02:42:01.293122595Z" level=info msg="CreateContainer within sandbox \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\"" Sep 13 02:42:01.293904 containerd[1592]: time="2025-09-13T02:42:01.293873793Z" level=info msg="StartContainer for \"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\"" Sep 13 02:42:01.298569 containerd[1592]: time="2025-09-13T02:42:01.298482738Z" level=info msg="connecting to shim 792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6" address="unix:///run/containerd/s/3b233c501c439c6b11096629d3b6065119fb6c451dd0498fe6df88b6b913d386" protocol=ttrpc version=3 Sep 13 02:42:01.337405 systemd[1]: Started cri-containerd-792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6.scope - libcontainer container 792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6. Sep 13 02:42:01.426100 containerd[1592]: time="2025-09-13T02:42:01.426008047Z" level=info msg="StartContainer for \"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\" returns successfully" Sep 13 02:42:02.404222 systemd[1]: cri-containerd-792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6.scope: Deactivated successfully. Sep 13 02:42:02.404690 systemd[1]: cri-containerd-792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6.scope: Consumed 806ms CPU time, 164.6M memory peak, 10.3M read from disk, 171.3M written to disk. 
Sep 13 02:42:02.520166 kubelet[2886]: I0913 02:42:02.519691 2886 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 02:42:02.604311 containerd[1592]: time="2025-09-13T02:42:02.603205036Z" level=info msg="received exit event container_id:\"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\" id:\"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\" pid:3587 exited_at:{seconds:1757731322 nanos:575080850}" Sep 13 02:42:02.605777 containerd[1592]: time="2025-09-13T02:42:02.605743624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\" id:\"792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6\" pid:3587 exited_at:{seconds:1757731322 nanos:575080850}" Sep 13 02:42:02.748377 systemd[1]: Created slice kubepods-burstable-pod39391357_6cd5_4640_876b_98d0772daef3.slice - libcontainer container kubepods-burstable-pod39391357_6cd5_4640_876b_98d0772daef3.slice. Sep 13 02:42:02.806612 systemd[1]: Created slice kubepods-burstable-pod33a25dab_4bd4_4636_89ad_04ed566fe785.slice - libcontainer container kubepods-burstable-pod33a25dab_4bd4_4636_89ad_04ed566fe785.slice. 
Sep 13 02:42:02.827801 kubelet[2886]: I0913 02:42:02.827697 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c490c5d-dffd-485c-b60c-9e71e8a46784-tigera-ca-bundle\") pod \"calico-kube-controllers-c8b546f5c-hth6g\" (UID: \"7c490c5d-dffd-485c-b60c-9e71e8a46784\") " pod="calico-system/calico-kube-controllers-c8b546f5c-hth6g" Sep 13 02:42:02.827801 kubelet[2886]: I0913 02:42:02.827783 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39391357-6cd5-4640-876b-98d0772daef3-config-volume\") pod \"coredns-674b8bbfcf-w9fdx\" (UID: \"39391357-6cd5-4640-876b-98d0772daef3\") " pod="kube-system/coredns-674b8bbfcf-w9fdx" Sep 13 02:42:02.828597 kubelet[2886]: I0913 02:42:02.827816 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33a25dab-4bd4-4636-89ad-04ed566fe785-config-volume\") pod \"coredns-674b8bbfcf-284nk\" (UID: \"33a25dab-4bd4-4636-89ad-04ed566fe785\") " pod="kube-system/coredns-674b8bbfcf-284nk" Sep 13 02:42:02.828597 kubelet[2886]: I0913 02:42:02.827845 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ghw\" (UniqueName: \"kubernetes.io/projected/3beedf30-6e52-4699-905c-e5a623f9e36f-kube-api-access-26ghw\") pod \"calico-apiserver-6cc55dd8c9-7shds\" (UID: \"3beedf30-6e52-4699-905c-e5a623f9e36f\") " pod="calico-apiserver/calico-apiserver-6cc55dd8c9-7shds" Sep 13 02:42:02.828597 kubelet[2886]: I0913 02:42:02.827889 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8p8\" (UniqueName: \"kubernetes.io/projected/8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd-kube-api-access-8r8p8\") pod \"calico-apiserver-6cc55dd8c9-t2tjc\" (UID: 
\"8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd\") " pod="calico-apiserver/calico-apiserver-6cc55dd8c9-t2tjc" Sep 13 02:42:02.828597 kubelet[2886]: I0913 02:42:02.827920 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkl7d\" (UniqueName: \"kubernetes.io/projected/7c490c5d-dffd-485c-b60c-9e71e8a46784-kube-api-access-nkl7d\") pod \"calico-kube-controllers-c8b546f5c-hth6g\" (UID: \"7c490c5d-dffd-485c-b60c-9e71e8a46784\") " pod="calico-system/calico-kube-controllers-c8b546f5c-hth6g" Sep 13 02:42:02.828597 kubelet[2886]: I0913 02:42:02.827953 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwh2\" (UniqueName: \"kubernetes.io/projected/39391357-6cd5-4640-876b-98d0772daef3-kube-api-access-2mwh2\") pod \"coredns-674b8bbfcf-w9fdx\" (UID: \"39391357-6cd5-4640-876b-98d0772daef3\") " pod="kube-system/coredns-674b8bbfcf-w9fdx" Sep 13 02:42:02.829361 kubelet[2886]: I0913 02:42:02.828008 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4znqc\" (UniqueName: \"kubernetes.io/projected/33a25dab-4bd4-4636-89ad-04ed566fe785-kube-api-access-4znqc\") pod \"coredns-674b8bbfcf-284nk\" (UID: \"33a25dab-4bd4-4636-89ad-04ed566fe785\") " pod="kube-system/coredns-674b8bbfcf-284nk" Sep 13 02:42:02.829361 kubelet[2886]: I0913 02:42:02.828069 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd-calico-apiserver-certs\") pod \"calico-apiserver-6cc55dd8c9-t2tjc\" (UID: \"8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd\") " pod="calico-apiserver/calico-apiserver-6cc55dd8c9-t2tjc" Sep 13 02:42:02.829361 kubelet[2886]: I0913 02:42:02.828102 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/3beedf30-6e52-4699-905c-e5a623f9e36f-calico-apiserver-certs\") pod \"calico-apiserver-6cc55dd8c9-7shds\" (UID: \"3beedf30-6e52-4699-905c-e5a623f9e36f\") " pod="calico-apiserver/calico-apiserver-6cc55dd8c9-7shds" Sep 13 02:42:02.837338 systemd[1]: Created slice kubepods-besteffort-pod7c490c5d_dffd_485c_b60c_9e71e8a46784.slice - libcontainer container kubepods-besteffort-pod7c490c5d_dffd_485c_b60c_9e71e8a46784.slice. Sep 13 02:42:02.847091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-792a03a38129cec02ea9d09bc15ded2cc1787a8efef419bfb0fef9362a72ecc6-rootfs.mount: Deactivated successfully. Sep 13 02:42:02.953359 kubelet[2886]: I0913 02:42:02.953051 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-ca-bundle\") pod \"whisker-5dcf579b8d-xldwd\" (UID: \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\") " pod="calico-system/whisker-5dcf579b8d-xldwd" Sep 13 02:42:02.957048 kubelet[2886]: I0913 02:42:02.953218 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7l7\" (UniqueName: \"kubernetes.io/projected/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-kube-api-access-hd7l7\") pod \"whisker-5dcf579b8d-xldwd\" (UID: \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\") " pod="calico-system/whisker-5dcf579b8d-xldwd" Sep 13 02:42:02.957048 kubelet[2886]: I0913 02:42:02.956360 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-backend-key-pair\") pod \"whisker-5dcf579b8d-xldwd\" (UID: \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\") " pod="calico-system/whisker-5dcf579b8d-xldwd" Sep 13 02:42:02.957048 kubelet[2886]: I0913 02:42:02.956527 2886 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad-config\") pod \"goldmane-54d579b49d-d8lgq\" (UID: \"f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad\") " pod="calico-system/goldmane-54d579b49d-d8lgq" Sep 13 02:42:02.957048 kubelet[2886]: I0913 02:42:02.956669 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-d8lgq\" (UID: \"f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad\") " pod="calico-system/goldmane-54d579b49d-d8lgq" Sep 13 02:42:02.957048 kubelet[2886]: I0913 02:42:02.956783 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad-goldmane-key-pair\") pod \"goldmane-54d579b49d-d8lgq\" (UID: \"f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad\") " pod="calico-system/goldmane-54d579b49d-d8lgq" Sep 13 02:42:02.957684 kubelet[2886]: I0913 02:42:02.956905 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgs7m\" (UniqueName: \"kubernetes.io/projected/f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad-kube-api-access-hgs7m\") pod \"goldmane-54d579b49d-d8lgq\" (UID: \"f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad\") " pod="calico-system/goldmane-54d579b49d-d8lgq" Sep 13 02:42:02.974650 systemd[1]: Created slice kubepods-besteffort-pod8b6d8159_d9eb_461b_ab8c_1a1a3c2694dd.slice - libcontainer container kubepods-besteffort-pod8b6d8159_d9eb_461b_ab8c_1a1a3c2694dd.slice. 
Sep 13 02:42:03.081547 containerd[1592]: time="2025-09-13T02:42:03.081372550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 02:42:03.086370 systemd[1]: Created slice kubepods-besteffort-pod3beedf30_6e52_4699_905c_e5a623f9e36f.slice - libcontainer container kubepods-besteffort-pod3beedf30_6e52_4699_905c_e5a623f9e36f.slice. Sep 13 02:42:03.089264 containerd[1592]: time="2025-09-13T02:42:03.089227759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9fdx,Uid:39391357-6cd5-4640-876b-98d0772daef3,Namespace:kube-system,Attempt:0,}" Sep 13 02:42:03.096273 containerd[1592]: time="2025-09-13T02:42:03.096230514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-7shds,Uid:3beedf30-6e52-4699-905c-e5a623f9e36f,Namespace:calico-apiserver,Attempt:0,}" Sep 13 02:42:03.114362 systemd[1]: Created slice kubepods-besteffort-pod2a5ecd3c_8469_4118_ac7a_41f73ef3d954.slice - libcontainer container kubepods-besteffort-pod2a5ecd3c_8469_4118_ac7a_41f73ef3d954.slice. Sep 13 02:42:03.135548 containerd[1592]: time="2025-09-13T02:42:03.135499170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-284nk,Uid:33a25dab-4bd4-4636-89ad-04ed566fe785,Namespace:kube-system,Attempt:0,}" Sep 13 02:42:03.146649 systemd[1]: Created slice kubepods-besteffort-podf66af8bf_59fd_4cb6_b3ee_fe9b3ef8cfad.slice - libcontainer container kubepods-besteffort-podf66af8bf_59fd_4cb6_b3ee_fe9b3ef8cfad.slice. Sep 13 02:42:03.180287 systemd[1]: Created slice kubepods-besteffort-podfa4dd181_bd58_4b28_95b9_780c8fc5de89.slice - libcontainer container kubepods-besteffort-podfa4dd181_bd58_4b28_95b9_780c8fc5de89.slice. 
Sep 13 02:42:03.195062 containerd[1592]: time="2025-09-13T02:42:03.194932890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw4fm,Uid:fa4dd181-bd58-4b28-95b9-780c8fc5de89,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:03.212307 containerd[1592]: time="2025-09-13T02:42:03.211622415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8lgq,Uid:f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:03.262573 containerd[1592]: time="2025-09-13T02:42:03.262505485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8b546f5c-hth6g,Uid:7c490c5d-dffd-485c-b60c-9e71e8a46784,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:03.340347 containerd[1592]: time="2025-09-13T02:42:03.339974714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-t2tjc,Uid:8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd,Namespace:calico-apiserver,Attempt:0,}" Sep 13 02:42:03.460927 containerd[1592]: time="2025-09-13T02:42:03.460638570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dcf579b8d-xldwd,Uid:2a5ecd3c-8469-4118-ac7a-41f73ef3d954,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:03.585838 containerd[1592]: time="2025-09-13T02:42:03.585662992Z" level=error msg="Failed to destroy network for sandbox \"1d7b5755b7f9f18ecaaf69707fdc5b5fd2bc7b4e8b27fb1e8f1b6f487eb2abdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.612586 containerd[1592]: time="2025-09-13T02:42:03.589381182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-7shds,Uid:3beedf30-6e52-4699-905c-e5a623f9e36f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1d7b5755b7f9f18ecaaf69707fdc5b5fd2bc7b4e8b27fb1e8f1b6f487eb2abdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.616350 kubelet[2886]: E0913 02:42:03.615904 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d7b5755b7f9f18ecaaf69707fdc5b5fd2bc7b4e8b27fb1e8f1b6f487eb2abdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.616350 kubelet[2886]: E0913 02:42:03.616147 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d7b5755b7f9f18ecaaf69707fdc5b5fd2bc7b4e8b27fb1e8f1b6f487eb2abdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-7shds" Sep 13 02:42:03.616350 kubelet[2886]: E0913 02:42:03.616225 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d7b5755b7f9f18ecaaf69707fdc5b5fd2bc7b4e8b27fb1e8f1b6f487eb2abdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-7shds" Sep 13 02:42:03.617138 kubelet[2886]: E0913 02:42:03.616953 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc55dd8c9-7shds_calico-apiserver(3beedf30-6e52-4699-905c-e5a623f9e36f)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6cc55dd8c9-7shds_calico-apiserver(3beedf30-6e52-4699-905c-e5a623f9e36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d7b5755b7f9f18ecaaf69707fdc5b5fd2bc7b4e8b27fb1e8f1b6f487eb2abdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-7shds" podUID="3beedf30-6e52-4699-905c-e5a623f9e36f" Sep 13 02:42:03.683351 containerd[1592]: time="2025-09-13T02:42:03.683273823Z" level=error msg="Failed to destroy network for sandbox \"6c66d54ace7cc6b47501d303f6c61760515d7d6fd814ffbd222e0f53179e224b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.691248 containerd[1592]: time="2025-09-13T02:42:03.690978034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-t2tjc,Uid:8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c66d54ace7cc6b47501d303f6c61760515d7d6fd814ffbd222e0f53179e224b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.691801 kubelet[2886]: E0913 02:42:03.691629 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c66d54ace7cc6b47501d303f6c61760515d7d6fd814ffbd222e0f53179e224b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.691992 
kubelet[2886]: E0913 02:42:03.691834 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c66d54ace7cc6b47501d303f6c61760515d7d6fd814ffbd222e0f53179e224b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-t2tjc" Sep 13 02:42:03.691992 kubelet[2886]: E0913 02:42:03.691881 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c66d54ace7cc6b47501d303f6c61760515d7d6fd814ffbd222e0f53179e224b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-t2tjc" Sep 13 02:42:03.692195 containerd[1592]: time="2025-09-13T02:42:03.692163069Z" level=error msg="Failed to destroy network for sandbox \"fe487ede7d28186a8e639d409880a2d09a298529e0fcde2883640da1b374957d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.694202 kubelet[2886]: E0913 02:42:03.692337 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc55dd8c9-t2tjc_calico-apiserver(8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc55dd8c9-t2tjc_calico-apiserver(8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c66d54ace7cc6b47501d303f6c61760515d7d6fd814ffbd222e0f53179e224b\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-t2tjc" podUID="8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd" Sep 13 02:42:03.701189 containerd[1592]: time="2025-09-13T02:42:03.701111027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw4fm,Uid:fa4dd181-bd58-4b28-95b9-780c8fc5de89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe487ede7d28186a8e639d409880a2d09a298529e0fcde2883640da1b374957d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.701618 kubelet[2886]: E0913 02:42:03.701570 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe487ede7d28186a8e639d409880a2d09a298529e0fcde2883640da1b374957d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.701828 kubelet[2886]: E0913 02:42:03.701787 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe487ede7d28186a8e639d409880a2d09a298529e0fcde2883640da1b374957d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw4fm" Sep 13 02:42:03.702001 kubelet[2886]: E0913 02:42:03.701971 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fe487ede7d28186a8e639d409880a2d09a298529e0fcde2883640da1b374957d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gw4fm" Sep 13 02:42:03.704093 kubelet[2886]: E0913 02:42:03.702230 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gw4fm_calico-system(fa4dd181-bd58-4b28-95b9-780c8fc5de89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gw4fm_calico-system(fa4dd181-bd58-4b28-95b9-780c8fc5de89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe487ede7d28186a8e639d409880a2d09a298529e0fcde2883640da1b374957d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gw4fm" podUID="fa4dd181-bd58-4b28-95b9-780c8fc5de89" Sep 13 02:42:03.712295 containerd[1592]: time="2025-09-13T02:42:03.712238082Z" level=error msg="Failed to destroy network for sandbox \"78c7604ff198ca73003a8b9c05e28b66cd9a65647ffe2c7e8ab5fda3f81e44ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.716443 containerd[1592]: time="2025-09-13T02:42:03.716396600Z" level=error msg="Failed to destroy network for sandbox \"63231c86453c23e71c380a4011c95342c1a2ea26d82dd4b3063c1c1ebc45c93a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.717616 containerd[1592]: time="2025-09-13T02:42:03.717575595Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-284nk,Uid:33a25dab-4bd4-4636-89ad-04ed566fe785,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7604ff198ca73003a8b9c05e28b66cd9a65647ffe2c7e8ab5fda3f81e44ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.718112 kubelet[2886]: E0913 02:42:03.718061 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7604ff198ca73003a8b9c05e28b66cd9a65647ffe2c7e8ab5fda3f81e44ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.719138 kubelet[2886]: E0913 02:42:03.718320 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7604ff198ca73003a8b9c05e28b66cd9a65647ffe2c7e8ab5fda3f81e44ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-284nk" Sep 13 02:42:03.719138 kubelet[2886]: E0913 02:42:03.718398 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7604ff198ca73003a8b9c05e28b66cd9a65647ffe2c7e8ab5fda3f81e44ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-284nk" Sep 13 02:42:03.719138 kubelet[2886]: E0913 02:42:03.718580 2886 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-284nk_kube-system(33a25dab-4bd4-4636-89ad-04ed566fe785)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-284nk_kube-system(33a25dab-4bd4-4636-89ad-04ed566fe785)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78c7604ff198ca73003a8b9c05e28b66cd9a65647ffe2c7e8ab5fda3f81e44ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-284nk" podUID="33a25dab-4bd4-4636-89ad-04ed566fe785" Sep 13 02:42:03.722869 containerd[1592]: time="2025-09-13T02:42:03.722794533Z" level=error msg="Failed to destroy network for sandbox \"e388d1bbfb4b8f9aaa24659b195225618ee620a08f7bbbd8b15736fcd4d31f28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.730684 containerd[1592]: time="2025-09-13T02:42:03.730613603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9fdx,Uid:39391357-6cd5-4640-876b-98d0772daef3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63231c86453c23e71c380a4011c95342c1a2ea26d82dd4b3063c1c1ebc45c93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.732175 kubelet[2886]: E0913 02:42:03.731003 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63231c86453c23e71c380a4011c95342c1a2ea26d82dd4b3063c1c1ebc45c93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.732614 kubelet[2886]: E0913 02:42:03.732209 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63231c86453c23e71c380a4011c95342c1a2ea26d82dd4b3063c1c1ebc45c93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w9fdx" Sep 13 02:42:03.732614 kubelet[2886]: E0913 02:42:03.732244 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63231c86453c23e71c380a4011c95342c1a2ea26d82dd4b3063c1c1ebc45c93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w9fdx" Sep 13 02:42:03.732614 kubelet[2886]: E0913 02:42:03.732338 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w9fdx_kube-system(39391357-6cd5-4640-876b-98d0772daef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w9fdx_kube-system(39391357-6cd5-4640-876b-98d0772daef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63231c86453c23e71c380a4011c95342c1a2ea26d82dd4b3063c1c1ebc45c93a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w9fdx" podUID="39391357-6cd5-4640-876b-98d0772daef3" Sep 13 02:42:03.741985 containerd[1592]: time="2025-09-13T02:42:03.741897505Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-c8b546f5c-hth6g,Uid:7c490c5d-dffd-485c-b60c-9e71e8a46784,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e388d1bbfb4b8f9aaa24659b195225618ee620a08f7bbbd8b15736fcd4d31f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.742798 kubelet[2886]: E0913 02:42:03.742750 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e388d1bbfb4b8f9aaa24659b195225618ee620a08f7bbbd8b15736fcd4d31f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.743229 kubelet[2886]: E0913 02:42:03.743137 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e388d1bbfb4b8f9aaa24659b195225618ee620a08f7bbbd8b15736fcd4d31f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8b546f5c-hth6g" Sep 13 02:42:03.743377 kubelet[2886]: E0913 02:42:03.743348 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e388d1bbfb4b8f9aaa24659b195225618ee620a08f7bbbd8b15736fcd4d31f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8b546f5c-hth6g" Sep 13 02:42:03.743721 kubelet[2886]: E0913 02:42:03.743670 
2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c8b546f5c-hth6g_calico-system(7c490c5d-dffd-485c-b60c-9e71e8a46784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c8b546f5c-hth6g_calico-system(7c490c5d-dffd-485c-b60c-9e71e8a46784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e388d1bbfb4b8f9aaa24659b195225618ee620a08f7bbbd8b15736fcd4d31f28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8b546f5c-hth6g" podUID="7c490c5d-dffd-485c-b60c-9e71e8a46784" Sep 13 02:42:03.745676 containerd[1592]: time="2025-09-13T02:42:03.745528107Z" level=error msg="Failed to destroy network for sandbox \"dcb515db711410157355a0483be223ffab2588b90ea70d8665b6a4a5df5a6283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.748197 containerd[1592]: time="2025-09-13T02:42:03.748004409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8lgq,Uid:f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb515db711410157355a0483be223ffab2588b90ea70d8665b6a4a5df5a6283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.749370 kubelet[2886]: E0913 02:42:03.749260 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dcb515db711410157355a0483be223ffab2588b90ea70d8665b6a4a5df5a6283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.750640 kubelet[2886]: E0913 02:42:03.749622 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb515db711410157355a0483be223ffab2588b90ea70d8665b6a4a5df5a6283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8lgq" Sep 13 02:42:03.750640 kubelet[2886]: E0913 02:42:03.750083 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb515db711410157355a0483be223ffab2588b90ea70d8665b6a4a5df5a6283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8lgq" Sep 13 02:42:03.750640 kubelet[2886]: E0913 02:42:03.750176 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d8lgq_calico-system(f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d8lgq_calico-system(f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcb515db711410157355a0483be223ffab2588b90ea70d8665b6a4a5df5a6283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d8lgq" 
podUID="f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad" Sep 13 02:42:03.763564 containerd[1592]: time="2025-09-13T02:42:03.762173289Z" level=error msg="Failed to destroy network for sandbox \"a32a8cb2021a9efc687b1ac02d0f27a29dea148fc4e7a86136f7ba7fe229cc1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.768056 containerd[1592]: time="2025-09-13T02:42:03.767972591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dcf579b8d-xldwd,Uid:2a5ecd3c-8469-4118-ac7a-41f73ef3d954,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a8cb2021a9efc687b1ac02d0f27a29dea148fc4e7a86136f7ba7fe229cc1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.770053 kubelet[2886]: E0913 02:42:03.769299 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a8cb2021a9efc687b1ac02d0f27a29dea148fc4e7a86136f7ba7fe229cc1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 02:42:03.770053 kubelet[2886]: E0913 02:42:03.769377 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a8cb2021a9efc687b1ac02d0f27a29dea148fc4e7a86136f7ba7fe229cc1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dcf579b8d-xldwd" Sep 13 02:42:03.770053 kubelet[2886]: E0913 
02:42:03.769476 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a8cb2021a9efc687b1ac02d0f27a29dea148fc4e7a86136f7ba7fe229cc1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dcf579b8d-xldwd" Sep 13 02:42:03.772883 kubelet[2886]: E0913 02:42:03.769546 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dcf579b8d-xldwd_calico-system(2a5ecd3c-8469-4118-ac7a-41f73ef3d954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dcf579b8d-xldwd_calico-system(2a5ecd3c-8469-4118-ac7a-41f73ef3d954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a32a8cb2021a9efc687b1ac02d0f27a29dea148fc4e7a86136f7ba7fe229cc1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dcf579b8d-xldwd" podUID="2a5ecd3c-8469-4118-ac7a-41f73ef3d954" Sep 13 02:42:13.293764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86227729.mount: Deactivated successfully. 
Sep 13 02:42:13.428052 containerd[1592]: time="2025-09-13T02:42:13.391666224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 02:42:13.433496 containerd[1592]: time="2025-09-13T02:42:13.432922864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:13.485894 containerd[1592]: time="2025-09-13T02:42:13.485781422Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:13.494839 containerd[1592]: time="2025-09-13T02:42:13.494798830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:13.496955 containerd[1592]: time="2025-09-13T02:42:13.496864130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.414090067s" Sep 13 02:42:13.496955 containerd[1592]: time="2025-09-13T02:42:13.496917959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 02:42:13.541395 containerd[1592]: time="2025-09-13T02:42:13.541057511Z" level=info msg="CreateContainer within sandbox \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 02:42:13.606363 containerd[1592]: time="2025-09-13T02:42:13.606218212Z" level=info msg="Container 
8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:13.606508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531045632.mount: Deactivated successfully. Sep 13 02:42:13.659919 containerd[1592]: time="2025-09-13T02:42:13.659841962Z" level=info msg="CreateContainer within sandbox \"86338bcfc996940d5d65874d13a468c036cf519233e860912bd0385a4ab2466d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\"" Sep 13 02:42:13.662753 containerd[1592]: time="2025-09-13T02:42:13.662606208Z" level=info msg="StartContainer for \"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\"" Sep 13 02:42:13.669190 containerd[1592]: time="2025-09-13T02:42:13.669146286Z" level=info msg="connecting to shim 8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3" address="unix:///run/containerd/s/3b233c501c439c6b11096629d3b6065119fb6c451dd0498fe6df88b6b913d386" protocol=ttrpc version=3 Sep 13 02:42:13.846294 systemd[1]: Started cri-containerd-8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3.scope - libcontainer container 8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3. Sep 13 02:42:13.983117 containerd[1592]: time="2025-09-13T02:42:13.982680169Z" level=info msg="StartContainer for \"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\" returns successfully" Sep 13 02:42:14.138068 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 02:42:14.139162 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 02:42:14.385826 kubelet[2886]: I0913 02:42:14.384259 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zkx8h" podStartSLOduration=1.725527339 podStartE2EDuration="24.382112246s" podCreationTimestamp="2025-09-13 02:41:50 +0000 UTC" firstStartedPulling="2025-09-13 02:41:50.848633528 +0000 UTC m=+22.370061856" lastFinishedPulling="2025-09-13 02:42:13.505218428 +0000 UTC m=+45.026646763" observedRunningTime="2025-09-13 02:42:14.18396118 +0000 UTC m=+45.705389528" watchObservedRunningTime="2025-09-13 02:42:14.382112246 +0000 UTC m=+45.903540589" Sep 13 02:42:14.446599 kubelet[2886]: I0913 02:42:14.446517 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-backend-key-pair\") pod \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\" (UID: \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\") " Sep 13 02:42:14.446920 kubelet[2886]: I0913 02:42:14.446616 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-ca-bundle\") pod \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\" (UID: \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\") " Sep 13 02:42:14.446920 kubelet[2886]: I0913 02:42:14.446671 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd7l7\" (UniqueName: \"kubernetes.io/projected/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-kube-api-access-hd7l7\") pod \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\" (UID: \"2a5ecd3c-8469-4118-ac7a-41f73ef3d954\") " Sep 13 02:42:14.500329 systemd[1]: var-lib-kubelet-pods-2a5ecd3c\x2d8469\x2d4118\x2dac7a\x2d41f73ef3d954-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhd7l7.mount: Deactivated successfully. 
Sep 13 02:42:14.521613 kubelet[2886]: I0913 02:42:14.520973 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-kube-api-access-hd7l7" (OuterVolumeSpecName: "kube-api-access-hd7l7") pod "2a5ecd3c-8469-4118-ac7a-41f73ef3d954" (UID: "2a5ecd3c-8469-4118-ac7a-41f73ef3d954"). InnerVolumeSpecName "kube-api-access-hd7l7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 02:42:14.523584 kubelet[2886]: I0913 02:42:14.522333 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2a5ecd3c-8469-4118-ac7a-41f73ef3d954" (UID: "2a5ecd3c-8469-4118-ac7a-41f73ef3d954"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 02:42:14.528702 kubelet[2886]: I0913 02:42:14.528614 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2a5ecd3c-8469-4118-ac7a-41f73ef3d954" (UID: "2a5ecd3c-8469-4118-ac7a-41f73ef3d954"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 02:42:14.529569 systemd[1]: var-lib-kubelet-pods-2a5ecd3c\x2d8469\x2d4118\x2dac7a\x2d41f73ef3d954-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 02:42:14.547354 kubelet[2886]: I0913 02:42:14.547290 2886 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-ca-bundle\") on node \"srv-m9tmw.gb1.brightbox.com\" DevicePath \"\"" Sep 13 02:42:14.547354 kubelet[2886]: I0913 02:42:14.547335 2886 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hd7l7\" (UniqueName: \"kubernetes.io/projected/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-kube-api-access-hd7l7\") on node \"srv-m9tmw.gb1.brightbox.com\" DevicePath \"\"" Sep 13 02:42:14.547354 kubelet[2886]: I0913 02:42:14.547356 2886 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a5ecd3c-8469-4118-ac7a-41f73ef3d954-whisker-backend-key-pair\") on node \"srv-m9tmw.gb1.brightbox.com\" DevicePath \"\"" Sep 13 02:42:14.591874 kubelet[2886]: I0913 02:42:14.591499 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 02:42:14.744720 containerd[1592]: time="2025-09-13T02:42:14.743724997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8b546f5c-hth6g,Uid:7c490c5d-dffd-485c-b60c-9e71e8a46784,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:14.748047 containerd[1592]: time="2025-09-13T02:42:14.747521323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8lgq,Uid:f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:14.749050 containerd[1592]: time="2025-09-13T02:42:14.748742631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw4fm,Uid:fa4dd181-bd58-4b28-95b9-780c8fc5de89,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:14.857963 systemd[1]: Removed slice kubepods-besteffort-pod2a5ecd3c_8469_4118_ac7a_41f73ef3d954.slice - libcontainer container kubepods-besteffort-pod2a5ecd3c_8469_4118_ac7a_41f73ef3d954.slice. 
Sep 13 02:42:15.384729 systemd[1]: Created slice kubepods-besteffort-pod089a9cbc_2222_4810_a7b6_11adfc37f9f4.slice - libcontainer container kubepods-besteffort-pod089a9cbc_2222_4810_a7b6_11adfc37f9f4.slice. Sep 13 02:42:15.457523 kubelet[2886]: I0913 02:42:15.457204 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwpvm\" (UniqueName: \"kubernetes.io/projected/089a9cbc-2222-4810-a7b6-11adfc37f9f4-kube-api-access-rwpvm\") pod \"whisker-6ddd8c889f-nwdcv\" (UID: \"089a9cbc-2222-4810-a7b6-11adfc37f9f4\") " pod="calico-system/whisker-6ddd8c889f-nwdcv" Sep 13 02:42:15.460627 kubelet[2886]: I0913 02:42:15.459964 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/089a9cbc-2222-4810-a7b6-11adfc37f9f4-whisker-backend-key-pair\") pod \"whisker-6ddd8c889f-nwdcv\" (UID: \"089a9cbc-2222-4810-a7b6-11adfc37f9f4\") " pod="calico-system/whisker-6ddd8c889f-nwdcv" Sep 13 02:42:15.460627 kubelet[2886]: I0913 02:42:15.460145 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/089a9cbc-2222-4810-a7b6-11adfc37f9f4-whisker-ca-bundle\") pod \"whisker-6ddd8c889f-nwdcv\" (UID: \"089a9cbc-2222-4810-a7b6-11adfc37f9f4\") " pod="calico-system/whisker-6ddd8c889f-nwdcv" Sep 13 02:42:15.510483 systemd-networkd[1498]: cali6108b97c576: Link UP Sep 13 02:42:15.512820 systemd-networkd[1498]: cali6108b97c576: Gained carrier Sep 13 02:42:15.573470 containerd[1592]: 2025-09-13 02:42:14.987 [INFO][3910] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:15.573470 containerd[1592]: 2025-09-13 02:42:15.026 [INFO][3910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0 
csi-node-driver- calico-system fa4dd181-bd58-4b28-95b9-780c8fc5de89 712 0 2025-09-13 02:41:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com csi-node-driver-gw4fm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6108b97c576 [] [] }} ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-" Sep 13 02:42:15.573470 containerd[1592]: 2025-09-13 02:42:15.026 [INFO][3910] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.573470 containerd[1592]: 2025-09-13 02:42:15.312 [INFO][3943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" HandleID="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Workload="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.315 [INFO][3943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" HandleID="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Workload="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003882e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"csi-node-driver-gw4fm", "timestamp":"2025-09-13 02:42:15.312114573 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.315 [INFO][3943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.315 [INFO][3943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.317 [INFO][3943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.376 [INFO][3943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.429 [INFO][3943] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.437 [INFO][3943] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.440 [INFO][3943] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575462 containerd[1592]: 2025-09-13 02:42:15.444 [INFO][3943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.444 [INFO][3943] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.55.128/26 handle="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.448 [INFO][3943] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.459 [INFO][3943] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.470 [INFO][3943] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.129/26] block=192.168.55.128/26 handle="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.471 [INFO][3943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.129/26] handle="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.471 [INFO][3943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 02:42:15.575899 containerd[1592]: 2025-09-13 02:42:15.471 [INFO][3943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.129/26] IPv6=[] ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" HandleID="k8s-pod-network.d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Workload="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.576234 containerd[1592]: 2025-09-13 02:42:15.480 [INFO][3910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa4dd181-bd58-4b28-95b9-780c8fc5de89", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-gw4fm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.129/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6108b97c576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:15.576381 containerd[1592]: 2025-09-13 02:42:15.481 [INFO][3910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.129/32] ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.576381 containerd[1592]: 2025-09-13 02:42:15.481 [INFO][3910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6108b97c576 ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.576381 containerd[1592]: 2025-09-13 02:42:15.508 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.576513 containerd[1592]: 2025-09-13 02:42:15.519 [INFO][3910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa4dd181-bd58-4b28-95b9-780c8fc5de89", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d", Pod:"csi-node-driver-gw4fm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6108b97c576", MAC:"82:20:73:92:e5:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:15.576617 containerd[1592]: 2025-09-13 02:42:15.561 [INFO][3910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" Namespace="calico-system" Pod="csi-node-driver-gw4fm" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-csi--node--driver--gw4fm-eth0" Sep 13 02:42:15.707010 systemd-networkd[1498]: cali799effeb043: Link UP Sep 13 02:42:15.713460 systemd-networkd[1498]: cali799effeb043: Gained carrier Sep 13 02:42:15.715429 containerd[1592]: time="2025-09-13T02:42:15.715341227Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ddd8c889f-nwdcv,Uid:089a9cbc-2222-4810-a7b6-11adfc37f9f4,Namespace:calico-system,Attempt:0,}" Sep 13 02:42:15.744220 containerd[1592]: time="2025-09-13T02:42:15.742636662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-t2tjc,Uid:8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd,Namespace:calico-apiserver,Attempt:0,}" Sep 13 02:42:15.824585 containerd[1592]: 2025-09-13 02:42:14.973 [INFO][3903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:15.824585 containerd[1592]: 2025-09-13 02:42:15.028 [INFO][3903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0 calico-kube-controllers-c8b546f5c- calico-system 7c490c5d-dffd-485c-b60c-9e71e8a46784 822 0 2025-09-13 02:41:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c8b546f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com calico-kube-controllers-c8b546f5c-hth6g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali799effeb043 [] [] }} ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-" Sep 13 02:42:15.824585 containerd[1592]: 2025-09-13 02:42:15.028 [INFO][3903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" 
WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.824585 containerd[1592]: 2025-09-13 02:42:15.311 [INFO][3946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" HandleID="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.313 [INFO][3946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" HandleID="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328410), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"calico-kube-controllers-c8b546f5c-hth6g", "timestamp":"2025-09-13 02:42:15.311519489 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.313 [INFO][3946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.471 [INFO][3946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.471 [INFO][3946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.493 [INFO][3946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.538 [INFO][3946] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.602 [INFO][3946] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.612 [INFO][3946] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825452 containerd[1592]: 2025-09-13 02:42:15.618 [INFO][3946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.618 [INFO][3946] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.621 [INFO][3946] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1 Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.639 [INFO][3946] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.673 [INFO][3946] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.130/26] block=192.168.55.128/26 handle="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.675 [INFO][3946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.130/26] handle="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.675 [INFO][3946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 02:42:15.825871 containerd[1592]: 2025-09-13 02:42:15.675 [INFO][3946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.130/26] IPv6=[] ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" HandleID="k8s-pod-network.3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.827445 containerd[1592]: 2025-09-13 02:42:15.691 [INFO][3903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0", GenerateName:"calico-kube-controllers-c8b546f5c-", Namespace:"calico-system", SelfLink:"", UID:"7c490c5d-dffd-485c-b60c-9e71e8a46784", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8b546f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-c8b546f5c-hth6g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali799effeb043", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:15.827546 containerd[1592]: 2025-09-13 02:42:15.692 [INFO][3903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.130/32] ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.827546 containerd[1592]: 2025-09-13 02:42:15.692 [INFO][3903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali799effeb043 ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.827546 containerd[1592]: 2025-09-13 02:42:15.714 [INFO][3903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.827664 containerd[1592]: 2025-09-13 02:42:15.723 [INFO][3903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0", GenerateName:"calico-kube-controllers-c8b546f5c-", Namespace:"calico-system", SelfLink:"", UID:"7c490c5d-dffd-485c-b60c-9e71e8a46784", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8b546f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1", Pod:"calico-kube-controllers-c8b546f5c-hth6g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali799effeb043", MAC:"e2:e3:b5:ea:ac:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:15.827792 containerd[1592]: 2025-09-13 02:42:15.772 [INFO][3903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" Namespace="calico-system" Pod="calico-kube-controllers-c8b546f5c-hth6g" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--kube--controllers--c8b546f5c--hth6g-eth0" Sep 13 02:42:15.893642 containerd[1592]: time="2025-09-13T02:42:15.893016347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\" id:\"f976310b5dc9b5b5766423f9913acc2a994d6561512860746d5348bbfbcb7457\" pid:3983 exit_status:1 exited_at:{seconds:1757731335 nanos:798776645}" Sep 13 02:42:15.922214 systemd-networkd[1498]: cali7a649e30ddd: Link UP Sep 13 02:42:15.932707 systemd-networkd[1498]: cali7a649e30ddd: Gained carrier Sep 13 02:42:15.936092 containerd[1592]: time="2025-09-13T02:42:15.936024353Z" level=info msg="connecting to shim d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d" address="unix:///run/containerd/s/c46889930e63ca9d2557f46c612f31069d86b4ec54818e8550e806bf9054301b" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:15.977916 containerd[1592]: time="2025-09-13T02:42:15.977303566Z" level=info msg="connecting to shim 3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1" address="unix:///run/containerd/s/a56c486e38563bc33ad10bf8ef6873ec4c52564624d5b2cd516c68f8916e09fa" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:15.984110 containerd[1592]: 2025-09-13 02:42:14.899 [INFO][3908] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist 
Sep 13 02:42:15.984110 containerd[1592]: 2025-09-13 02:42:15.003 [INFO][3908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0 goldmane-54d579b49d- calico-system f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad 829 0 2025-09-13 02:41:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com goldmane-54d579b49d-d8lgq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a649e30ddd [] [] }} ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-" Sep 13 02:42:15.984110 containerd[1592]: 2025-09-13 02:42:15.005 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:15.984110 containerd[1592]: 2025-09-13 02:42:15.312 [INFO][3938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" HandleID="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Workload="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.315 [INFO][3938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" HandleID="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" 
Workload="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7270), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"goldmane-54d579b49d-d8lgq", "timestamp":"2025-09-13 02:42:15.309543454 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.316 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.678 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.678 [INFO][3938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.705 [INFO][3938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.737 [INFO][3938] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.767 [INFO][3938] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.787 [INFO][3938] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984472 containerd[1592]: 2025-09-13 02:42:15.796 [INFO][3938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 
host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.798 [INFO][3938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.810 [INFO][3938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074 Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.837 [INFO][3938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.857 [INFO][3938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.131/26] block=192.168.55.128/26 handle="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.857 [INFO][3938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.131/26] handle="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.857 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 02:42:15.984856 containerd[1592]: 2025-09-13 02:42:15.857 [INFO][3938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.131/26] IPv6=[] ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" HandleID="k8s-pod-network.fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Workload="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:15.985273 containerd[1592]: 2025-09-13 02:42:15.885 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-54d579b49d-d8lgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a649e30ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:15.987073 containerd[1592]: 2025-09-13 02:42:15.885 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.131/32] ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:15.987073 containerd[1592]: 2025-09-13 02:42:15.885 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a649e30ddd ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:15.987073 containerd[1592]: 2025-09-13 02:42:15.935 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:15.987252 containerd[1592]: 2025-09-13 02:42:15.936 [INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", 
SelfLink:"", UID:"f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074", Pod:"goldmane-54d579b49d-d8lgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a649e30ddd", MAC:"6e:3b:83:c8:c0:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:15.987361 containerd[1592]: 2025-09-13 02:42:15.971 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" Namespace="calico-system" Pod="goldmane-54d579b49d-d8lgq" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-goldmane--54d579b49d--d8lgq-eth0" Sep 13 02:42:16.031581 systemd[1]: Started cri-containerd-3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1.scope - libcontainer container 3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1. 
Sep 13 02:42:16.128667 systemd[1]: Started cri-containerd-d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d.scope - libcontainer container d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d. Sep 13 02:42:16.182197 containerd[1592]: time="2025-09-13T02:42:16.179845378Z" level=info msg="connecting to shim fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074" address="unix:///run/containerd/s/7a2bcc1cdd2d4910ab9133778e44fa2398d0a829e83b73cf67bf686f9d801968" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:16.349313 systemd[1]: Started cri-containerd-fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074.scope - libcontainer container fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074. Sep 13 02:42:16.522531 systemd-networkd[1498]: cali5ad6449f062: Link UP Sep 13 02:42:16.526347 systemd-networkd[1498]: cali5ad6449f062: Gained carrier Sep 13 02:42:16.529888 containerd[1592]: time="2025-09-13T02:42:16.529383689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gw4fm,Uid:fa4dd181-bd58-4b28-95b9-780c8fc5de89,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d\"" Sep 13 02:42:16.540255 containerd[1592]: time="2025-09-13T02:42:16.540210579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 02:42:16.583322 containerd[1592]: 2025-09-13 02:42:16.149 [INFO][4006] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:16.583322 containerd[1592]: 2025-09-13 02:42:16.202 [INFO][4006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0 whisker-6ddd8c889f- calico-system 089a9cbc-2222-4810-a7b6-11adfc37f9f4 905 0 2025-09-13 02:42:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6ddd8c889f 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com whisker-6ddd8c889f-nwdcv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5ad6449f062 [] [] }} ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-" Sep 13 02:42:16.583322 containerd[1592]: 2025-09-13 02:42:16.202 [INFO][4006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.583322 containerd[1592]: 2025-09-13 02:42:16.368 [INFO][4155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" HandleID="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Workload="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.369 [INFO][4155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" HandleID="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Workload="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381c50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"whisker-6ddd8c889f-nwdcv", "timestamp":"2025-09-13 02:42:16.36727811 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.369 [INFO][4155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.371 [INFO][4155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.371 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.391 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.409 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.422 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.432 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.583743 containerd[1592]: 2025-09-13 02:42:16.438 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.438 [INFO][4155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.444 [INFO][4155] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2 Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.463 [INFO][4155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.499 [INFO][4155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.132/26] block=192.168.55.128/26 handle="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.499 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.132/26] handle="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.499 [INFO][4155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 02:42:16.585664 containerd[1592]: 2025-09-13 02:42:16.499 [INFO][4155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.132/26] IPv6=[] ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" HandleID="k8s-pod-network.a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Workload="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.588125 containerd[1592]: 2025-09-13 02:42:16.506 [INFO][4006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0", GenerateName:"whisker-6ddd8c889f-", Namespace:"calico-system", SelfLink:"", UID:"089a9cbc-2222-4810-a7b6-11adfc37f9f4", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ddd8c889f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6ddd8c889f-nwdcv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali5ad6449f062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:16.588125 containerd[1592]: 2025-09-13 02:42:16.506 [INFO][4006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.132/32] ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.588321 containerd[1592]: 2025-09-13 02:42:16.506 [INFO][4006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ad6449f062 ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.588321 containerd[1592]: 2025-09-13 02:42:16.531 [INFO][4006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.588436 containerd[1592]: 2025-09-13 02:42:16.536 [INFO][4006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0", GenerateName:"whisker-6ddd8c889f-", Namespace:"calico-system", SelfLink:"", 
UID:"089a9cbc-2222-4810-a7b6-11adfc37f9f4", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ddd8c889f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2", Pod:"whisker-6ddd8c889f-nwdcv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ad6449f062", MAC:"32:98:13:3d:68:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:16.588533 containerd[1592]: 2025-09-13 02:42:16.579 [INFO][4006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" Namespace="calico-system" Pod="whisker-6ddd8c889f-nwdcv" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-whisker--6ddd8c889f--nwdcv-eth0" Sep 13 02:42:16.677781 containerd[1592]: time="2025-09-13T02:42:16.677603802Z" level=info msg="connecting to shim a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2" address="unix:///run/containerd/s/6951a6e57e7039af03b275c8a90e13f318dcf9ed5a8469f4608a1dc396fb8364" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:16.729335 systemd[1]: Started 
cri-containerd-a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2.scope - libcontainer container a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2. Sep 13 02:42:16.737248 systemd-networkd[1498]: cali36bcb37b0ea: Link UP Sep 13 02:42:16.739164 systemd-networkd[1498]: cali36bcb37b0ea: Gained carrier Sep 13 02:42:16.752119 containerd[1592]: time="2025-09-13T02:42:16.750596214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9fdx,Uid:39391357-6cd5-4640-876b-98d0772daef3,Namespace:kube-system,Attempt:0,}" Sep 13 02:42:16.765978 containerd[1592]: time="2025-09-13T02:42:16.765907023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-284nk,Uid:33a25dab-4bd4-4636-89ad-04ed566fe785,Namespace:kube-system,Attempt:0,}" Sep 13 02:42:16.787309 kubelet[2886]: I0913 02:42:16.786105 2886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5ecd3c-8469-4118-ac7a-41f73ef3d954" path="/var/lib/kubelet/pods/2a5ecd3c-8469-4118-ac7a-41f73ef3d954/volumes" Sep 13 02:42:16.802364 containerd[1592]: time="2025-09-13T02:42:16.802293441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8b546f5c-hth6g,Uid:7c490c5d-dffd-485c-b60c-9e71e8a46784,Namespace:calico-system,Attempt:0,} returns sandbox id \"3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1\"" Sep 13 02:42:16.845569 containerd[1592]: 2025-09-13 02:42:16.092 [INFO][4017] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:16.845569 containerd[1592]: 2025-09-13 02:42:16.161 [INFO][4017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0 calico-apiserver-6cc55dd8c9- calico-apiserver 8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd 823 0 2025-09-13 02:41:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:6cc55dd8c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com calico-apiserver-6cc55dd8c9-t2tjc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36bcb37b0ea [] [] }} ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-" Sep 13 02:42:16.845569 containerd[1592]: 2025-09-13 02:42:16.162 [INFO][4017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.845569 containerd[1592]: 2025-09-13 02:42:16.401 [INFO][4136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" HandleID="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.404 [INFO][4136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" HandleID="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"calico-apiserver-6cc55dd8c9-t2tjc", 
"timestamp":"2025-09-13 02:42:16.401654363 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.405 [INFO][4136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.500 [INFO][4136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.501 [INFO][4136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.569 [INFO][4136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.593 [INFO][4136] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.606 [INFO][4136] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.614 [INFO][4136] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847508 containerd[1592]: 2025-09-13 02:42:16.631 [INFO][4136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.638 [INFO][4136] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" 
host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.660 [INFO][4136] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26 Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.683 [INFO][4136] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.707 [INFO][4136] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.133/26] block=192.168.55.128/26 handle="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.707 [INFO][4136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.133/26] handle="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.708 [INFO][4136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 02:42:16.847938 containerd[1592]: 2025-09-13 02:42:16.708 [INFO][4136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.133/26] IPv6=[] ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" HandleID="k8s-pod-network.c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.850580 containerd[1592]: 2025-09-13 02:42:16.724 [INFO][4017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0", GenerateName:"calico-apiserver-6cc55dd8c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc55dd8c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6cc55dd8c9-t2tjc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36bcb37b0ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:16.850710 containerd[1592]: 2025-09-13 02:42:16.727 [INFO][4017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.133/32] ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.850710 containerd[1592]: 2025-09-13 02:42:16.727 [INFO][4017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36bcb37b0ea ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.850710 containerd[1592]: 2025-09-13 02:42:16.740 [INFO][4017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.852212 containerd[1592]: 2025-09-13 02:42:16.744 [INFO][4017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0", GenerateName:"calico-apiserver-6cc55dd8c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc55dd8c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26", Pod:"calico-apiserver-6cc55dd8c9-t2tjc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36bcb37b0ea", MAC:"ce:c2:f0:02:15:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:16.852328 containerd[1592]: 2025-09-13 02:42:16.833 [INFO][4017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-t2tjc" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--t2tjc-eth0" Sep 13 02:42:16.991357 systemd-networkd[1498]: cali7a649e30ddd: Gained IPv6LL Sep 
13 02:42:16.991812 systemd-networkd[1498]: cali6108b97c576: Gained IPv6LL Sep 13 02:42:17.010809 containerd[1592]: time="2025-09-13T02:42:17.010471378Z" level=info msg="connecting to shim c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26" address="unix:///run/containerd/s/1fa629fe1e27943b65400e705b676f6849aa2d895104e37158dc008d0057376d" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:17.055304 systemd-networkd[1498]: cali799effeb043: Gained IPv6LL Sep 13 02:42:17.154804 systemd[1]: Started cri-containerd-c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26.scope - libcontainer container c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26. Sep 13 02:42:17.372884 systemd-networkd[1498]: cali3bc78af853e: Link UP Sep 13 02:42:17.379972 systemd-networkd[1498]: cali3bc78af853e: Gained carrier Sep 13 02:42:17.411108 containerd[1592]: time="2025-09-13T02:42:17.410494709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8lgq,Uid:f66af8bf-59fd-4cb6-b3ee-fe9b3ef8cfad,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074\"" Sep 13 02:42:17.447257 containerd[1592]: time="2025-09-13T02:42:17.444796997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ddd8c889f-nwdcv,Uid:089a9cbc-2222-4810-a7b6-11adfc37f9f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2\"" Sep 13 02:42:17.452617 containerd[1592]: 2025-09-13 02:42:16.929 [INFO][4288] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:17.452617 containerd[1592]: 2025-09-13 02:42:16.985 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0 coredns-674b8bbfcf- kube-system 39391357-6cd5-4640-876b-98d0772daef3 821 0 2025-09-13 02:41:33 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com coredns-674b8bbfcf-w9fdx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3bc78af853e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-" Sep 13 02:42:17.452617 containerd[1592]: 2025-09-13 02:42:16.987 [INFO][4288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.452617 containerd[1592]: 2025-09-13 02:42:17.145 [INFO][4333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" HandleID="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Workload="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.145 [INFO][4333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" HandleID="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Workload="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cc040), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-w9fdx", "timestamp":"2025-09-13 02:42:17.145239532 +0000 UTC"}, 
Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.145 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.145 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.145 [INFO][4333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.187 [INFO][4333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.207 [INFO][4333] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.218 [INFO][4333] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.222 [INFO][4333] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.453119 containerd[1592]: 2025-09-13 02:42:17.237 [INFO][4333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.454457 containerd[1592]: 2025-09-13 02:42:17.247 [INFO][4333] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.454457 containerd[1592]: 
2025-09-13 02:42:17.261 [INFO][4333] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5 Sep 13 02:42:17.454457 containerd[1592]: 2025-09-13 02:42:17.279 [INFO][4333] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.454457 containerd[1592]: 2025-09-13 02:42:17.302 [INFO][4333] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.134/26] block=192.168.55.128/26 handle="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.454457 containerd[1592]: 2025-09-13 02:42:17.309 [INFO][4333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.134/26] handle="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.454457 containerd[1592]: 2025-09-13 02:42:17.311 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 02:42:17.454457 containerd[1592]: 2025-09-13 02:42:17.321 [INFO][4333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.134/26] IPv6=[] ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" HandleID="k8s-pod-network.3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Workload="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.454810 containerd[1592]: 2025-09-13 02:42:17.357 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39391357-6cd5-4640-876b-98d0772daef3", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-w9fdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali3bc78af853e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:17.454810 containerd[1592]: 2025-09-13 02:42:17.358 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.134/32] ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.454810 containerd[1592]: 2025-09-13 02:42:17.358 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bc78af853e ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.454810 containerd[1592]: 2025-09-13 02:42:17.383 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.454810 containerd[1592]: 2025-09-13 02:42:17.393 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" 
WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39391357-6cd5-4640-876b-98d0772daef3", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5", Pod:"coredns-674b8bbfcf-w9fdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3bc78af853e", MAC:"ce:b7:c1:13:f7:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:17.454810 
containerd[1592]: 2025-09-13 02:42:17.423 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9fdx" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--w9fdx-eth0" Sep 13 02:42:17.542165 containerd[1592]: time="2025-09-13T02:42:17.542080147Z" level=info msg="connecting to shim 3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5" address="unix:///run/containerd/s/21d538fd66d5715bb0b461517e1b45a843e670283948c0ae8a4c45536aff00c8" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:17.586486 systemd-networkd[1498]: calif9f4eabfab4: Link UP Sep 13 02:42:17.598411 systemd-networkd[1498]: calif9f4eabfab4: Gained carrier Sep 13 02:42:17.627330 systemd[1]: Started cri-containerd-3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5.scope - libcontainer container 3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5. 
Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:16.951 [INFO][4297] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.006 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0 coredns-674b8bbfcf- kube-system 33a25dab-4bd4-4636-89ad-04ed566fe785 828 0 2025-09-13 02:41:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com coredns-674b8bbfcf-284nk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9f4eabfab4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.008 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.204 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" HandleID="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Workload="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.204 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" HandleID="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Workload="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374560), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-284nk", "timestamp":"2025-09-13 02:42:17.204050937 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.204 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.315 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.327 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.385 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.425 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.467 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.472 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.484 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.484 [INFO][4344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.495 [INFO][4344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.516 [INFO][4344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.540 [INFO][4344] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.135/26] block=192.168.55.128/26 handle="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.541 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.135/26] handle="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.541 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 02:42:17.637722 containerd[1592]: 2025-09-13 02:42:17.542 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.135/26] IPv6=[] ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" HandleID="k8s-pod-network.b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Workload="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.640577 containerd[1592]: 2025-09-13 02:42:17.558 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"33a25dab-4bd4-4636-89ad-04ed566fe785", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-284nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9f4eabfab4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:17.640577 containerd[1592]: 2025-09-13 02:42:17.558 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.135/32] ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.640577 containerd[1592]: 2025-09-13 02:42:17.559 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9f4eabfab4 ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.640577 containerd[1592]: 
2025-09-13 02:42:17.599 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.640577 containerd[1592]: 2025-09-13 02:42:17.601 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"33a25dab-4bd4-4636-89ad-04ed566fe785", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c", Pod:"coredns-674b8bbfcf-284nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calif9f4eabfab4", MAC:"4e:ff:c7:ed:f9:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:17.640577 containerd[1592]: 2025-09-13 02:42:17.626 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-284nk" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-coredns--674b8bbfcf--284nk-eth0" Sep 13 02:42:17.705916 containerd[1592]: time="2025-09-13T02:42:17.705847693Z" level=info msg="connecting to shim b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c" address="unix:///run/containerd/s/2a66f9c8e097dcd89f8fb848621adb00f21fda900a5fa070b1ff9098701eaf10" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:17.744707 containerd[1592]: time="2025-09-13T02:42:17.744578632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-7shds,Uid:3beedf30-6e52-4699-905c-e5a623f9e36f,Namespace:calico-apiserver,Attempt:0,}" Sep 13 02:42:17.782488 systemd[1]: Started cri-containerd-b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c.scope - libcontainer container b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c. 
Sep 13 02:42:17.819943 containerd[1592]: time="2025-09-13T02:42:17.819880045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9fdx,Uid:39391357-6cd5-4640-876b-98d0772daef3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5\"" Sep 13 02:42:17.841981 containerd[1592]: time="2025-09-13T02:42:17.841898775Z" level=info msg="CreateContainer within sandbox \"3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 02:42:17.888252 containerd[1592]: time="2025-09-13T02:42:17.888081756Z" level=info msg="Container db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:17.903385 containerd[1592]: time="2025-09-13T02:42:17.902770768Z" level=info msg="CreateContainer within sandbox \"3301fbd5a2c7c31665e6459d35e3a870e76f0351aead1cce12c28cc8472a62e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a\"" Sep 13 02:42:17.909316 containerd[1592]: time="2025-09-13T02:42:17.909250075Z" level=info msg="StartContainer for \"db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a\"" Sep 13 02:42:17.913079 containerd[1592]: time="2025-09-13T02:42:17.912869891Z" level=info msg="connecting to shim db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a" address="unix:///run/containerd/s/21d538fd66d5715bb0b461517e1b45a843e670283948c0ae8a4c45536aff00c8" protocol=ttrpc version=3 Sep 13 02:42:17.950434 containerd[1592]: time="2025-09-13T02:42:17.949813008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-284nk,Uid:33a25dab-4bd4-4636-89ad-04ed566fe785,Namespace:kube-system,Attempt:0,} returns sandbox id \"b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c\"" Sep 13 02:42:17.969765 containerd[1592]: 
time="2025-09-13T02:42:17.967936403Z" level=info msg="CreateContainer within sandbox \"b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 02:42:17.998659 containerd[1592]: time="2025-09-13T02:42:17.998596327Z" level=info msg="Container d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:18.016773 containerd[1592]: time="2025-09-13T02:42:18.016721648Z" level=info msg="CreateContainer within sandbox \"b417f9ebd1d805992ad81c742cd4dc95a151e24713d7ff6a2533e331a5475d1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15\"" Sep 13 02:42:18.024220 systemd[1]: Started cri-containerd-db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a.scope - libcontainer container db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a. Sep 13 02:42:18.027371 containerd[1592]: time="2025-09-13T02:42:18.026060110Z" level=info msg="StartContainer for \"d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15\"" Sep 13 02:42:18.034851 containerd[1592]: time="2025-09-13T02:42:18.034717099Z" level=info msg="connecting to shim d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15" address="unix:///run/containerd/s/2a66f9c8e097dcd89f8fb848621adb00f21fda900a5fa070b1ff9098701eaf10" protocol=ttrpc version=3 Sep 13 02:42:18.081987 systemd-networkd[1498]: cali5ad6449f062: Gained IPv6LL Sep 13 02:42:18.230718 systemd[1]: Started cri-containerd-d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15.scope - libcontainer container d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15. 
Sep 13 02:42:18.332063 containerd[1592]: time="2025-09-13T02:42:18.331959731Z" level=info msg="StartContainer for \"db9e9756fc2da63dada7dcbbf4b43df9e11bb367b54a1648d07dd64fd29da07a\" returns successfully" Sep 13 02:42:18.361835 containerd[1592]: time="2025-09-13T02:42:18.361762683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-t2tjc,Uid:8b6d8159-d9eb-461b-ab8c-1a1a3c2694dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26\"" Sep 13 02:42:18.438622 systemd-networkd[1498]: caliba465b4377e: Link UP Sep 13 02:42:18.442229 systemd-networkd[1498]: caliba465b4377e: Gained carrier Sep 13 02:42:18.463978 systemd-networkd[1498]: cali36bcb37b0ea: Gained IPv6LL Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:17.892 [INFO][4518] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:17.966 [INFO][4518] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0 calico-apiserver-6cc55dd8c9- calico-apiserver 3beedf30-6e52-4699-905c-e5a623f9e36f 825 0 2025-09-13 02:41:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc55dd8c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-m9tmw.gb1.brightbox.com calico-apiserver-6cc55dd8c9-7shds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba465b4377e [] [] }} ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-" Sep 13 02:42:18.479421 
containerd[1592]: 2025-09-13 02:42:17.967 [INFO][4518] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.127 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" HandleID="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.127 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" HandleID="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-m9tmw.gb1.brightbox.com", "pod":"calico-apiserver-6cc55dd8c9-7shds", "timestamp":"2025-09-13 02:42:18.121798405 +0000 UTC"}, Hostname:"srv-m9tmw.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.127 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.139 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.139 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-m9tmw.gb1.brightbox.com' Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.241 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.273 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.312 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.321 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.335 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.335 [INFO][4567] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.343 [INFO][4567] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3 Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.366 [INFO][4567] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.400 [INFO][4567] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.136/26] block=192.168.55.128/26 handle="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.400 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.136/26] handle="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" host="srv-m9tmw.gb1.brightbox.com" Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.400 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 02:42:18.479421 containerd[1592]: 2025-09-13 02:42:18.400 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.136/26] IPv6=[] ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" HandleID="k8s-pod-network.4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Workload="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.480765 containerd[1592]: 2025-09-13 02:42:18.414 [INFO][4518] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0", GenerateName:"calico-apiserver-6cc55dd8c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3beedf30-6e52-4699-905c-e5a623f9e36f", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc55dd8c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6cc55dd8c9-7shds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba465b4377e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:18.480765 containerd[1592]: 2025-09-13 02:42:18.414 [INFO][4518] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.136/32] ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.480765 containerd[1592]: 2025-09-13 02:42:18.415 [INFO][4518] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba465b4377e ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.480765 containerd[1592]: 2025-09-13 02:42:18.449 [INFO][4518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" 
Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.480765 containerd[1592]: 2025-09-13 02:42:18.454 [INFO][4518] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0", GenerateName:"calico-apiserver-6cc55dd8c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3beedf30-6e52-4699-905c-e5a623f9e36f", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 2, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc55dd8c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-m9tmw.gb1.brightbox.com", ContainerID:"4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3", Pod:"calico-apiserver-6cc55dd8c9-7shds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"caliba465b4377e", MAC:"26:a3:44:0e:26:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 02:42:18.480765 containerd[1592]: 2025-09-13 02:42:18.470 [INFO][4518] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc55dd8c9-7shds" WorkloadEndpoint="srv--m9tmw.gb1.brightbox.com-k8s-calico--apiserver--6cc55dd8c9--7shds-eth0" Sep 13 02:42:18.502322 containerd[1592]: time="2025-09-13T02:42:18.497881942Z" level=info msg="StartContainer for \"d806eecd6461f6bc9eefdd7fe1db48b3aec5fa2d853d04f4a943551cff3f8d15\" returns successfully" Sep 13 02:42:18.573418 containerd[1592]: time="2025-09-13T02:42:18.573183776Z" level=info msg="connecting to shim 4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3" address="unix:///run/containerd/s/4fc386695d9284215d8e0b5c5d8a9b7ba7d8d5a01db02e2ce27bc6c48424c899" namespace=k8s.io protocol=ttrpc version=3 Sep 13 02:42:18.690638 systemd[1]: Started cri-containerd-4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3.scope - libcontainer container 4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3. 
Sep 13 02:42:19.155181 containerd[1592]: time="2025-09-13T02:42:19.154877654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:19.157922 containerd[1592]: time="2025-09-13T02:42:19.157890500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 02:42:19.159506 containerd[1592]: time="2025-09-13T02:42:19.158898415Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:19.163734 containerd[1592]: time="2025-09-13T02:42:19.163648859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:19.165461 containerd[1592]: time="2025-09-13T02:42:19.164967252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.624704351s" Sep 13 02:42:19.165461 containerd[1592]: time="2025-09-13T02:42:19.165012543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 02:42:19.178282 containerd[1592]: time="2025-09-13T02:42:19.178236807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 02:42:19.218311 containerd[1592]: time="2025-09-13T02:42:19.218256277Z" level=info msg="CreateContainer within sandbox \"d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 02:42:19.231348 systemd-networkd[1498]: cali3bc78af853e: Gained IPv6LL Sep 13 02:42:19.243146 containerd[1592]: time="2025-09-13T02:42:19.242192124Z" level=info msg="Container 0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:19.261554 containerd[1592]: time="2025-09-13T02:42:19.261489394Z" level=info msg="CreateContainer within sandbox \"d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff\"" Sep 13 02:42:19.263746 containerd[1592]: time="2025-09-13T02:42:19.263689964Z" level=info msg="StartContainer for \"0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff\"" Sep 13 02:42:19.267983 containerd[1592]: time="2025-09-13T02:42:19.267831967Z" level=info msg="connecting to shim 0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff" address="unix:///run/containerd/s/c46889930e63ca9d2557f46c612f31069d86b4ec54818e8550e806bf9054301b" protocol=ttrpc version=3 Sep 13 02:42:19.295653 systemd-networkd[1498]: calif9f4eabfab4: Gained IPv6LL Sep 13 02:42:19.375236 systemd[1]: Started cri-containerd-0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff.scope - libcontainer container 0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff. 
Sep 13 02:42:19.462345 kubelet[2886]: I0913 02:42:19.460989 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w9fdx" podStartSLOduration=46.45591463 podStartE2EDuration="46.45591463s" podCreationTimestamp="2025-09-13 02:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:42:19.432624684 +0000 UTC m=+50.954053042" watchObservedRunningTime="2025-09-13 02:42:19.45591463 +0000 UTC m=+50.977342977" Sep 13 02:42:19.462345 kubelet[2886]: I0913 02:42:19.461414 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-284nk" podStartSLOduration=46.461398239 podStartE2EDuration="46.461398239s" podCreationTimestamp="2025-09-13 02:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:42:19.455294387 +0000 UTC m=+50.976722735" watchObservedRunningTime="2025-09-13 02:42:19.461398239 +0000 UTC m=+50.982826582" Sep 13 02:42:19.573375 containerd[1592]: time="2025-09-13T02:42:19.573140677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc55dd8c9-7shds,Uid:3beedf30-6e52-4699-905c-e5a623f9e36f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3\"" Sep 13 02:42:19.668415 containerd[1592]: time="2025-09-13T02:42:19.668197146Z" level=info msg="StartContainer for \"0a4b7f71f9842d0417519ad15308c8ea56769ce48c369f01b3339e12e79f9aff\" returns successfully" Sep 13 02:42:20.062913 containerd[1592]: time="2025-09-13T02:42:20.062753650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\" id:\"59809aad5e70a421c1dc971fa841db8ed6062564e4893e1c150bb0c8a731bb86\" pid:4180 exit_status:1 
exited_at:{seconds:1757731340 nanos:60315154}" Sep 13 02:42:20.063354 systemd-networkd[1498]: caliba465b4377e: Gained IPv6LL Sep 13 02:42:20.752574 systemd-networkd[1498]: vxlan.calico: Link UP Sep 13 02:42:20.752590 systemd-networkd[1498]: vxlan.calico: Gained carrier Sep 13 02:42:21.983358 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Sep 13 02:42:23.386968 containerd[1592]: time="2025-09-13T02:42:23.386868320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:23.388751 containerd[1592]: time="2025-09-13T02:42:23.388237040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 02:42:23.389930 containerd[1592]: time="2025-09-13T02:42:23.389487594Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:23.392269 containerd[1592]: time="2025-09-13T02:42:23.392208035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:23.403776 containerd[1592]: time="2025-09-13T02:42:23.403607067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.22430842s" Sep 13 02:42:23.403776 containerd[1592]: time="2025-09-13T02:42:23.403660166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference 
\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 02:42:23.426797 containerd[1592]: time="2025-09-13T02:42:23.426060409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 02:42:23.471282 containerd[1592]: time="2025-09-13T02:42:23.471062749Z" level=info msg="CreateContainer within sandbox \"3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 02:42:23.484065 containerd[1592]: time="2025-09-13T02:42:23.483107638Z" level=info msg="Container 1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:23.498059 containerd[1592]: time="2025-09-13T02:42:23.497982358Z" level=info msg="CreateContainer within sandbox \"3863ff01bf737e6c25241b0c396b8812aee1f7aa416b5e7bdfbf5a15acfedba1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\"" Sep 13 02:42:23.499508 containerd[1592]: time="2025-09-13T02:42:23.499477476Z" level=info msg="StartContainer for \"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\"" Sep 13 02:42:23.501390 containerd[1592]: time="2025-09-13T02:42:23.501357583Z" level=info msg="connecting to shim 1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7" address="unix:///run/containerd/s/a56c486e38563bc33ad10bf8ef6873ec4c52564624d5b2cd516c68f8916e09fa" protocol=ttrpc version=3 Sep 13 02:42:23.561440 systemd[1]: Started cri-containerd-1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7.scope - libcontainer container 1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7. 
Sep 13 02:42:23.673101 containerd[1592]: time="2025-09-13T02:42:23.672444265Z" level=info msg="StartContainer for \"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\" returns successfully" Sep 13 02:42:24.615254 containerd[1592]: time="2025-09-13T02:42:24.614999872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\" id:\"1ecc611aaf995fc7e3d1a1e62992eee5cb8c6b5f77c96ca45f0f4f591ce803d2\" pid:4923 exited_at:{seconds:1757731344 nanos:614479912}" Sep 13 02:42:24.648567 kubelet[2886]: I0913 02:42:24.648278 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c8b546f5c-hth6g" podStartSLOduration=28.059244931 podStartE2EDuration="34.648212895s" podCreationTimestamp="2025-09-13 02:41:50 +0000 UTC" firstStartedPulling="2025-09-13 02:42:16.815522925 +0000 UTC m=+48.336951253" lastFinishedPulling="2025-09-13 02:42:23.404490889 +0000 UTC m=+54.925919217" observedRunningTime="2025-09-13 02:42:24.546400899 +0000 UTC m=+56.067829250" watchObservedRunningTime="2025-09-13 02:42:24.648212895 +0000 UTC m=+56.169641231" Sep 13 02:42:26.997732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229700128.mount: Deactivated successfully. 
Sep 13 02:42:27.949462 containerd[1592]: time="2025-09-13T02:42:27.949383112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:27.951580 containerd[1592]: time="2025-09-13T02:42:27.951545993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 02:42:27.952844 containerd[1592]: time="2025-09-13T02:42:27.952803629Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:27.957384 containerd[1592]: time="2025-09-13T02:42:27.957239332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:27.958200 containerd[1592]: time="2025-09-13T02:42:27.957868512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.531752336s" Sep 13 02:42:27.958200 containerd[1592]: time="2025-09-13T02:42:27.957918120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 02:42:27.959726 containerd[1592]: time="2025-09-13T02:42:27.959671123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 02:42:27.968808 containerd[1592]: time="2025-09-13T02:42:27.967854653Z" level=info msg="CreateContainer within sandbox 
\"fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 02:42:28.022685 containerd[1592]: time="2025-09-13T02:42:28.020415183Z" level=info msg="Container a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:28.220510 containerd[1592]: time="2025-09-13T02:42:28.220232411Z" level=info msg="CreateContainer within sandbox \"fa440cf067fc1b79d04d889a76ee7bd4640cf20f289e8ebe21d54ed337e25074\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\"" Sep 13 02:42:28.228366 containerd[1592]: time="2025-09-13T02:42:28.226204216Z" level=info msg="StartContainer for \"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\"" Sep 13 02:42:28.329587 containerd[1592]: time="2025-09-13T02:42:28.329526787Z" level=info msg="connecting to shim a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5" address="unix:///run/containerd/s/7a2bcc1cdd2d4910ab9133778e44fa2398d0a829e83b73cf67bf686f9d801968" protocol=ttrpc version=3 Sep 13 02:42:28.394964 systemd[1]: Started cri-containerd-a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5.scope - libcontainer container a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5. 
Sep 13 02:42:28.596185 containerd[1592]: time="2025-09-13T02:42:28.596107752Z" level=info msg="StartContainer for \"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" returns successfully" Sep 13 02:42:29.701102 containerd[1592]: time="2025-09-13T02:42:29.700991230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" id:\"d812fa8713dc2a43077565c65ac640ab099b23a630b413a838e3742fca9c03b1\" pid:5001 exit_status:1 exited_at:{seconds:1757731349 nanos:699472169}" Sep 13 02:42:30.108673 containerd[1592]: time="2025-09-13T02:42:30.108594953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:30.110167 containerd[1592]: time="2025-09-13T02:42:30.109938387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 02:42:30.111156 containerd[1592]: time="2025-09-13T02:42:30.111091755Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:30.113942 containerd[1592]: time="2025-09-13T02:42:30.113865339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:30.115103 containerd[1592]: time="2025-09-13T02:42:30.114931366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.15489791s" Sep 13 02:42:30.115103 
containerd[1592]: time="2025-09-13T02:42:30.114976412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 02:42:30.117266 containerd[1592]: time="2025-09-13T02:42:30.117226437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 02:42:30.122702 containerd[1592]: time="2025-09-13T02:42:30.122653966Z" level=info msg="CreateContainer within sandbox \"a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 02:42:30.135052 containerd[1592]: time="2025-09-13T02:42:30.133748906Z" level=info msg="Container edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:30.157056 containerd[1592]: time="2025-09-13T02:42:30.156984379Z" level=info msg="CreateContainer within sandbox \"a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9\"" Sep 13 02:42:30.159728 containerd[1592]: time="2025-09-13T02:42:30.159689561Z" level=info msg="StartContainer for \"edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9\"" Sep 13 02:42:30.161484 containerd[1592]: time="2025-09-13T02:42:30.161449802Z" level=info msg="connecting to shim edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9" address="unix:///run/containerd/s/6951a6e57e7039af03b275c8a90e13f318dcf9ed5a8469f4608a1dc396fb8364" protocol=ttrpc version=3 Sep 13 02:42:30.200355 systemd[1]: Started cri-containerd-edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9.scope - libcontainer container edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9. 
Sep 13 02:42:30.339138 containerd[1592]: time="2025-09-13T02:42:30.338860555Z" level=info msg="StartContainer for \"edab1ad4ef3e85fd8e24bd3121373e208467126f189963f9821f9107cc9588b9\" returns successfully" Sep 13 02:42:30.674847 containerd[1592]: time="2025-09-13T02:42:30.674785912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" id:\"df589d8ba7f56beca2048e0851feb55a5c69271f668599d83aaacca5eb020c09\" pid:5061 exited_at:{seconds:1757731350 nanos:674274651}" Sep 13 02:42:30.696324 kubelet[2886]: I0913 02:42:30.695950 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-d8lgq" podStartSLOduration=30.152523448 podStartE2EDuration="40.695929694s" podCreationTimestamp="2025-09-13 02:41:50 +0000 UTC" firstStartedPulling="2025-09-13 02:42:17.416072439 +0000 UTC m=+48.937500780" lastFinishedPulling="2025-09-13 02:42:27.959478698 +0000 UTC m=+59.480907026" observedRunningTime="2025-09-13 02:42:29.578422482 +0000 UTC m=+61.099850838" watchObservedRunningTime="2025-09-13 02:42:30.695929694 +0000 UTC m=+62.217358035" Sep 13 02:42:34.301543 containerd[1592]: time="2025-09-13T02:42:34.301435772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:34.303498 containerd[1592]: time="2025-09-13T02:42:34.303452290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 02:42:34.305331 containerd[1592]: time="2025-09-13T02:42:34.305230718Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:34.308557 containerd[1592]: time="2025-09-13T02:42:34.308517945Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:34.310056 containerd[1592]: time="2025-09-13T02:42:34.309938238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.192671905s" Sep 13 02:42:34.310440 containerd[1592]: time="2025-09-13T02:42:34.310020638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 02:42:34.311773 containerd[1592]: time="2025-09-13T02:42:34.311724198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 02:42:34.318087 containerd[1592]: time="2025-09-13T02:42:34.317873202Z" level=info msg="CreateContainer within sandbox \"c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 02:42:34.330085 containerd[1592]: time="2025-09-13T02:42:34.328637922Z" level=info msg="Container 1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:34.340878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201598099.mount: Deactivated successfully. 
Sep 13 02:42:34.346611 containerd[1592]: time="2025-09-13T02:42:34.346491320Z" level=info msg="CreateContainer within sandbox \"c1a78d7efdbf3d538df5805ce44668cb390a82c70b4feff783189f9572ea6d26\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62\"" Sep 13 02:42:34.348013 containerd[1592]: time="2025-09-13T02:42:34.347981703Z" level=info msg="StartContainer for \"1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62\"" Sep 13 02:42:34.354703 containerd[1592]: time="2025-09-13T02:42:34.354596708Z" level=info msg="connecting to shim 1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62" address="unix:///run/containerd/s/1fa629fe1e27943b65400e705b676f6849aa2d895104e37158dc008d0057376d" protocol=ttrpc version=3 Sep 13 02:42:34.390238 systemd[1]: Started cri-containerd-1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62.scope - libcontainer container 1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62. 
Sep 13 02:42:34.469576 containerd[1592]: time="2025-09-13T02:42:34.469434594Z" level=info msg="StartContainer for \"1e75b09d15f17d28e31930d0d3b65b9b685485c4afc906370f9423c855eb2a62\" returns successfully" Sep 13 02:42:34.580646 kubelet[2886]: I0913 02:42:34.579746 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-t2tjc" podStartSLOduration=33.641037974 podStartE2EDuration="49.579726013s" podCreationTimestamp="2025-09-13 02:41:45 +0000 UTC" firstStartedPulling="2025-09-13 02:42:18.37280498 +0000 UTC m=+49.894233316" lastFinishedPulling="2025-09-13 02:42:34.311493021 +0000 UTC m=+65.832921355" observedRunningTime="2025-09-13 02:42:34.578160079 +0000 UTC m=+66.099588426" watchObservedRunningTime="2025-09-13 02:42:34.579726013 +0000 UTC m=+66.101154368" Sep 13 02:42:34.739056 containerd[1592]: time="2025-09-13T02:42:34.738797256Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 02:42:34.742720 containerd[1592]: time="2025-09-13T02:42:34.741758522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 02:42:34.747451 containerd[1592]: time="2025-09-13T02:42:34.747112157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 435.344975ms" Sep 13 02:42:34.747451 containerd[1592]: time="2025-09-13T02:42:34.747162498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 02:42:34.748681 containerd[1592]: 
time="2025-09-13T02:42:34.748632252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 02:42:34.757836 containerd[1592]: time="2025-09-13T02:42:34.757802531Z" level=info msg="CreateContainer within sandbox \"4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 02:42:34.768239 containerd[1592]: time="2025-09-13T02:42:34.768202972Z" level=info msg="Container 4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b: CDI devices from CRI Config.CDIDevices: []" Sep 13 02:42:34.786081 containerd[1592]: time="2025-09-13T02:42:34.785473473Z" level=info msg="CreateContainer within sandbox \"4c047d905d8dc217cf2a1868d578fa01e98c3fb68a60ff9715298db4a48367a3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b\"" Sep 13 02:42:34.786519 containerd[1592]: time="2025-09-13T02:42:34.786472623Z" level=info msg="StartContainer for \"4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b\"" Sep 13 02:42:34.789656 containerd[1592]: time="2025-09-13T02:42:34.789616827Z" level=info msg="connecting to shim 4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b" address="unix:///run/containerd/s/4fc386695d9284215d8e0b5c5d8a9b7ba7d8d5a01db02e2ce27bc6c48424c899" protocol=ttrpc version=3 Sep 13 02:42:34.828255 systemd[1]: Started cri-containerd-4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b.scope - libcontainer container 4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b. 
Sep 13 02:42:34.932501 containerd[1592]: time="2025-09-13T02:42:34.930801461Z" level=info msg="StartContainer for \"4de1cd81b577e5c9cfacbfa6cc580641dbc13d2ffd57da39ef4c7c5738526c5b\" returns successfully"
Sep 13 02:42:37.001024 containerd[1592]: time="2025-09-13T02:42:37.000915185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:42:37.003375 containerd[1592]: time="2025-09-13T02:42:37.003253538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 02:42:37.007418 containerd[1592]: time="2025-09-13T02:42:37.007346862Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:42:37.030054 containerd[1592]: time="2025-09-13T02:42:37.028285406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:42:37.044267 containerd[1592]: time="2025-09-13T02:42:37.043771675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.287583585s"
Sep 13 02:42:37.044760 containerd[1592]: time="2025-09-13T02:42:37.044326215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 02:42:37.062053 containerd[1592]: time="2025-09-13T02:42:37.061871907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 13 02:42:37.070741 containerd[1592]: time="2025-09-13T02:42:37.069944629Z" level=info msg="CreateContainer within sandbox \"d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 02:42:37.089151 containerd[1592]: time="2025-09-13T02:42:37.088316204Z" level=info msg="Container 68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764: CDI devices from CRI Config.CDIDevices: []"
Sep 13 02:42:37.101338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972461715.mount: Deactivated successfully.
Sep 13 02:42:37.157167 containerd[1592]: time="2025-09-13T02:42:37.157081524Z" level=info msg="CreateContainer within sandbox \"d9ad8ea43062829b91a472da2b56bcf376292a33e6101f3ba6c00071c2a1fa0d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764\""
Sep 13 02:42:37.162546 containerd[1592]: time="2025-09-13T02:42:37.161191631Z" level=info msg="StartContainer for \"68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764\""
Sep 13 02:42:37.167669 containerd[1592]: time="2025-09-13T02:42:37.167625644Z" level=info msg="connecting to shim 68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764" address="unix:///run/containerd/s/c46889930e63ca9d2557f46c612f31069d86b4ec54818e8550e806bf9054301b" protocol=ttrpc version=3
Sep 13 02:42:37.174822 kubelet[2886]: I0913 02:42:37.174652 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc55dd8c9-7shds" podStartSLOduration=37.00640226 podStartE2EDuration="52.174624591s" podCreationTimestamp="2025-09-13 02:41:45 +0000 UTC" firstStartedPulling="2025-09-13 02:42:19.580237649 +0000 UTC m=+51.101665978" lastFinishedPulling="2025-09-13 02:42:34.74845998 +0000 UTC m=+66.269888309" observedRunningTime="2025-09-13 02:42:35.593967339 +0000 UTC m=+67.115395687" watchObservedRunningTime="2025-09-13 02:42:37.174624591 +0000 UTC m=+68.696052939"
Sep 13 02:42:37.237415 systemd[1]: Started cri-containerd-68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764.scope - libcontainer container 68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764.
Sep 13 02:42:37.389820 containerd[1592]: time="2025-09-13T02:42:37.389743283Z" level=info msg="StartContainer for \"68d44a2ac7771bd48972f54e8957367656444196f4e19f078822170297bcc764\" returns successfully"
Sep 13 02:42:37.995930 kubelet[2886]: I0913 02:42:37.995100 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gw4fm" podStartSLOduration=27.471429773 podStartE2EDuration="47.995071155s" podCreationTimestamp="2025-09-13 02:41:50 +0000 UTC" firstStartedPulling="2025-09-13 02:42:16.536309198 +0000 UTC m=+48.057737528" lastFinishedPulling="2025-09-13 02:42:37.05995057 +0000 UTC m=+68.581378910" observedRunningTime="2025-09-13 02:42:37.786669687 +0000 UTC m=+69.308098046" watchObservedRunningTime="2025-09-13 02:42:37.995071155 +0000 UTC m=+69.516499493"
Sep 13 02:42:38.308104 kubelet[2886]: I0913 02:42:38.307700 2886 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 02:42:38.313717 kubelet[2886]: I0913 02:42:38.313586 2886 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 02:42:41.311940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182964566.mount: Deactivated successfully.
Sep 13 02:42:41.337162 containerd[1592]: time="2025-09-13T02:42:41.336825462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:42:41.338952 containerd[1592]: time="2025-09-13T02:42:41.338611551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 13 02:42:41.338952 containerd[1592]: time="2025-09-13T02:42:41.338848065Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:42:41.341697 containerd[1592]: time="2025-09-13T02:42:41.341651344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 02:42:41.342926 containerd[1592]: time="2025-09-13T02:42:41.342889522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.280727426s"
Sep 13 02:42:41.343087 containerd[1592]: time="2025-09-13T02:42:41.343057523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 13 02:42:41.367350 containerd[1592]: time="2025-09-13T02:42:41.367281262Z" level=info msg="CreateContainer within sandbox \"a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 13 02:42:41.389518 containerd[1592]: time="2025-09-13T02:42:41.389464594Z" level=info msg="Container 176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413: CDI devices from CRI Config.CDIDevices: []"
Sep 13 02:42:41.411060 containerd[1592]: time="2025-09-13T02:42:41.410993404Z" level=info msg="CreateContainer within sandbox \"a2c2fe51a2ca69ef3059d7dbd26b874d4c59c06e12776f4ee60066b371b20dc2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413\""
Sep 13 02:42:41.413667 containerd[1592]: time="2025-09-13T02:42:41.413633538Z" level=info msg="StartContainer for \"176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413\""
Sep 13 02:42:41.416619 containerd[1592]: time="2025-09-13T02:42:41.416544814Z" level=info msg="connecting to shim 176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413" address="unix:///run/containerd/s/6951a6e57e7039af03b275c8a90e13f318dcf9ed5a8469f4608a1dc396fb8364" protocol=ttrpc version=3
Sep 13 02:42:41.487663 systemd[1]: Started cri-containerd-176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413.scope - libcontainer container 176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413.
Sep 13 02:42:41.724908 containerd[1592]: time="2025-09-13T02:42:41.724124252Z" level=info msg="StartContainer for \"176e078190294d03b742e1131955c0cbea263d074a90a9ab5bbe55bbe748e413\" returns successfully"
Sep 13 02:42:42.812197 kubelet[2886]: I0913 02:42:42.812089 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6ddd8c889f-nwdcv" podStartSLOduration=3.9167796360000002 podStartE2EDuration="27.812060914s" podCreationTimestamp="2025-09-13 02:42:15 +0000 UTC" firstStartedPulling="2025-09-13 02:42:17.450407004 +0000 UTC m=+48.971835333" lastFinishedPulling="2025-09-13 02:42:41.345688282 +0000 UTC m=+72.867116611" observedRunningTime="2025-09-13 02:42:42.809574483 +0000 UTC m=+74.331002833" watchObservedRunningTime="2025-09-13 02:42:42.812060914 +0000 UTC m=+74.333489257"
Sep 13 02:42:46.660918 containerd[1592]: time="2025-09-13T02:42:46.660055961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\" id:\"be6774049bc29c85fc96d591e944d8ac87cb18b8bdd3b4dc6a5ebe2c16bb8573\" pid:5268 exited_at:{seconds:1757731366 nanos:622865316}"
Sep 13 02:42:54.646704 containerd[1592]: time="2025-09-13T02:42:54.646612067Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\" id:\"5700337602341ea322fec2078e2bb69ec0c7764e5e7a3922353b91aa2bebf8b1\" pid:5298 exited_at:{seconds:1757731374 nanos:644227842}"
Sep 13 02:42:57.099451 systemd[1]: Started sshd@9-10.230.23.130:22-139.178.89.65:54666.service - OpenSSH per-connection server daemon (139.178.89.65:54666).
Sep 13 02:42:58.168152 sshd[5311]: Accepted publickey for core from 139.178.89.65 port 54666 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:42:58.172515 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:42:58.196103 systemd-logind[1564]: New session 12 of user core.
Sep 13 02:42:58.208218 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 02:42:59.552274 sshd[5316]: Connection closed by 139.178.89.65 port 54666
Sep 13 02:42:59.552739 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
Sep 13 02:42:59.562924 systemd[1]: sshd@9-10.230.23.130:22-139.178.89.65:54666.service: Deactivated successfully.
Sep 13 02:42:59.569633 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 02:42:59.572351 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit.
Sep 13 02:42:59.574792 systemd-logind[1564]: Removed session 12.
Sep 13 02:43:01.465838 containerd[1592]: time="2025-09-13T02:43:01.465774749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" id:\"6db7924c869706f79f6888b4c3632be3d0762ebc77b1fda2fd358befe6223a44\" pid:5344 exited_at:{seconds:1757731381 nanos:463936122}"
Sep 13 02:43:04.734965 systemd[1]: Started sshd@10-10.230.23.130:22-139.178.89.65:38140.service - OpenSSH per-connection server daemon (139.178.89.65:38140).
Sep 13 02:43:05.820741 sshd[5366]: Accepted publickey for core from 139.178.89.65 port 38140 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:05.826636 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:05.844996 systemd-logind[1564]: New session 13 of user core.
Sep 13 02:43:05.851255 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 02:43:06.951962 sshd[5368]: Connection closed by 139.178.89.65 port 38140
Sep 13 02:43:06.952549 sshd-session[5366]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:06.962892 systemd[1]: sshd@10-10.230.23.130:22-139.178.89.65:38140.service: Deactivated successfully.
Sep 13 02:43:06.967768 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 02:43:06.973532 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit.
Sep 13 02:43:06.977327 systemd-logind[1564]: Removed session 13.
Sep 13 02:43:12.109570 systemd[1]: Started sshd@11-10.230.23.130:22-139.178.89.65:53708.service - OpenSSH per-connection server daemon (139.178.89.65:53708).
Sep 13 02:43:13.023522 sshd[5380]: Accepted publickey for core from 139.178.89.65 port 53708 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:13.027169 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:13.045856 systemd-logind[1564]: New session 14 of user core.
Sep 13 02:43:13.051443 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 02:43:13.807356 sshd[5382]: Connection closed by 139.178.89.65 port 53708
Sep 13 02:43:13.810479 sshd-session[5380]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:13.816665 systemd[1]: sshd@11-10.230.23.130:22-139.178.89.65:53708.service: Deactivated successfully.
Sep 13 02:43:13.821556 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 02:43:13.825970 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit.
Sep 13 02:43:13.828775 systemd-logind[1564]: Removed session 14.
Sep 13 02:43:13.970006 systemd[1]: Started sshd@12-10.230.23.130:22-139.178.89.65:53712.service - OpenSSH per-connection server daemon (139.178.89.65:53712).
Sep 13 02:43:14.903323 containerd[1592]: time="2025-09-13T02:43:14.903023824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" id:\"d5a83a03acc8a17f9cadc64e8f880b2f12256e656e1910140faf9984a93eadc1\" pid:5410 exited_at:{seconds:1757731394 nanos:899517745}"
Sep 13 02:43:14.930102 sshd[5395]: Accepted publickey for core from 139.178.89.65 port 53712 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:14.933969 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:14.944007 systemd-logind[1564]: New session 15 of user core.
Sep 13 02:43:14.954925 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 02:43:15.092914 containerd[1592]: time="2025-09-13T02:43:15.092841070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\" id:\"bae1232165c2ccb9b2e0e30dbbcc23e20ca196e1268f4eb0a8a2fd7c17d00e63\" pid:5434 exited_at:{seconds:1757731395 nanos:92355641}"
Sep 13 02:43:15.890155 sshd[5421]: Connection closed by 139.178.89.65 port 53712
Sep 13 02:43:15.890705 sshd-session[5395]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:15.898768 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit.
Sep 13 02:43:15.900140 systemd[1]: sshd@12-10.230.23.130:22-139.178.89.65:53712.service: Deactivated successfully.
Sep 13 02:43:15.904524 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 02:43:15.911748 systemd-logind[1564]: Removed session 15.
Sep 13 02:43:16.068202 systemd[1]: Started sshd@13-10.230.23.130:22-139.178.89.65:53720.service - OpenSSH per-connection server daemon (139.178.89.65:53720).
Sep 13 02:43:16.598693 containerd[1592]: time="2025-09-13T02:43:16.598589936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\" id:\"86afe74e6a548da3791b33a8b5b318a62cc2015a1b75b2770fbd6af8cd939dea\" pid:5469 exited_at:{seconds:1757731396 nanos:597361623}"
Sep 13 02:43:17.079218 sshd[5454]: Accepted publickey for core from 139.178.89.65 port 53720 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:17.086583 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:17.101087 systemd-logind[1564]: New session 16 of user core.
Sep 13 02:43:17.104390 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 02:43:17.985777 sshd[5480]: Connection closed by 139.178.89.65 port 53720
Sep 13 02:43:17.987407 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:17.994854 systemd[1]: sshd@13-10.230.23.130:22-139.178.89.65:53720.service: Deactivated successfully.
Sep 13 02:43:18.001849 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 02:43:18.006446 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit.
Sep 13 02:43:18.008990 systemd-logind[1564]: Removed session 16.
Sep 13 02:43:23.148695 systemd[1]: Started sshd@14-10.230.23.130:22-139.178.89.65:49908.service - OpenSSH per-connection server daemon (139.178.89.65:49908).
Sep 13 02:43:24.152565 sshd[5496]: Accepted publickey for core from 139.178.89.65 port 49908 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:24.156876 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:24.170310 systemd-logind[1564]: New session 17 of user core.
Sep 13 02:43:24.177524 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 02:43:24.948456 containerd[1592]: time="2025-09-13T02:43:24.948376670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\" id:\"888ac5fccc09328312d5e3a1854f52ba4e96d08052f2baef895bf88cc7efebf0\" pid:5511 exited_at:{seconds:1757731404 nanos:938297308}"
Sep 13 02:43:25.256712 sshd[5498]: Connection closed by 139.178.89.65 port 49908
Sep 13 02:43:25.256713 sshd-session[5496]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:25.274440 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit.
Sep 13 02:43:25.276698 systemd[1]: sshd@14-10.230.23.130:22-139.178.89.65:49908.service: Deactivated successfully.
Sep 13 02:43:25.282682 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 02:43:25.289462 systemd-logind[1564]: Removed session 17.
Sep 13 02:43:30.414915 systemd[1]: Started sshd@15-10.230.23.130:22-139.178.89.65:48614.service - OpenSSH per-connection server daemon (139.178.89.65:48614).
Sep 13 02:43:31.017807 containerd[1592]: time="2025-09-13T02:43:31.017296077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" id:\"b399c952915b90e9475fb0e4be45e9bb20592ffe7c4ef0ce2b81299e1740ae34\" pid:5548 exited_at:{seconds:1757731411 nanos:16292452}"
Sep 13 02:43:31.418064 sshd[5533]: Accepted publickey for core from 139.178.89.65 port 48614 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:31.421493 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:31.435301 systemd-logind[1564]: New session 18 of user core.
Sep 13 02:43:31.443241 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 02:43:32.351152 sshd[5558]: Connection closed by 139.178.89.65 port 48614
Sep 13 02:43:32.357514 sshd-session[5533]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:32.364424 systemd[1]: sshd@15-10.230.23.130:22-139.178.89.65:48614.service: Deactivated successfully.
Sep 13 02:43:32.368657 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 02:43:32.372669 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit.
Sep 13 02:43:32.376637 systemd-logind[1564]: Removed session 18.
Sep 13 02:43:37.512670 systemd[1]: Started sshd@16-10.230.23.130:22-139.178.89.65:48620.service - OpenSSH per-connection server daemon (139.178.89.65:48620).
Sep 13 02:43:38.546090 sshd[5572]: Accepted publickey for core from 139.178.89.65 port 48620 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:38.548667 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:38.557608 systemd-logind[1564]: New session 19 of user core.
Sep 13 02:43:38.563241 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 02:43:39.683129 sshd[5574]: Connection closed by 139.178.89.65 port 48620
Sep 13 02:43:39.682967 sshd-session[5572]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:39.695732 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
Sep 13 02:43:39.696468 systemd[1]: sshd@16-10.230.23.130:22-139.178.89.65:48620.service: Deactivated successfully.
Sep 13 02:43:39.699733 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 02:43:39.705023 systemd-logind[1564]: Removed session 19.
Sep 13 02:43:39.838391 systemd[1]: Started sshd@17-10.230.23.130:22-139.178.89.65:48624.service - OpenSSH per-connection server daemon (139.178.89.65:48624).
Sep 13 02:43:40.841090 sshd[5585]: Accepted publickey for core from 139.178.89.65 port 48624 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:40.843210 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:40.853743 systemd-logind[1564]: New session 20 of user core.
Sep 13 02:43:40.861313 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 02:43:42.017350 sshd[5587]: Connection closed by 139.178.89.65 port 48624
Sep 13 02:43:42.020484 sshd-session[5585]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:42.033374 systemd[1]: sshd@17-10.230.23.130:22-139.178.89.65:48624.service: Deactivated successfully.
Sep 13 02:43:42.038505 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 02:43:42.042692 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
Sep 13 02:43:42.047279 systemd-logind[1564]: Removed session 20.
Sep 13 02:43:42.171174 systemd[1]: Started sshd@18-10.230.23.130:22-139.178.89.65:39246.service - OpenSSH per-connection server daemon (139.178.89.65:39246).
Sep 13 02:43:43.138611 sshd[5606]: Accepted publickey for core from 139.178.89.65 port 39246 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:43.140983 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:43.150350 systemd-logind[1564]: New session 21 of user core.
Sep 13 02:43:43.158279 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 02:43:45.068506 sshd[5608]: Connection closed by 139.178.89.65 port 39246
Sep 13 02:43:45.071375 sshd-session[5606]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:45.089806 systemd[1]: sshd@18-10.230.23.130:22-139.178.89.65:39246.service: Deactivated successfully.
Sep 13 02:43:45.095499 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 02:43:45.099669 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit.
Sep 13 02:43:45.103290 systemd-logind[1564]: Removed session 21.
Sep 13 02:43:45.223634 systemd[1]: Started sshd@19-10.230.23.130:22-139.178.89.65:39248.service - OpenSSH per-connection server daemon (139.178.89.65:39248).
Sep 13 02:43:46.226573 sshd[5633]: Accepted publickey for core from 139.178.89.65 port 39248 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:46.232108 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:46.240226 systemd-logind[1564]: New session 22 of user core.
Sep 13 02:43:46.249790 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 02:43:46.924784 containerd[1592]: time="2025-09-13T02:43:46.915715750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f28f7c62111f5496de0bfafc826d82f54ed5cf1442a6265476c6685474de3c3\" id:\"5c65ab081535671d879ce1b9b8892ab16b51051d6f15f17fd3adfc2c5d6e3f22\" pid:5650 exited_at:{seconds:1757731426 nanos:831197965}"
Sep 13 02:43:47.866276 sshd[5636]: Connection closed by 139.178.89.65 port 39248
Sep 13 02:43:47.867728 sshd-session[5633]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:47.890249 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit.
Sep 13 02:43:47.892012 systemd[1]: sshd@19-10.230.23.130:22-139.178.89.65:39248.service: Deactivated successfully.
Sep 13 02:43:47.900372 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 02:43:47.901135 systemd[1]: session-22.scope: Consumed 647ms CPU time, 77.8M memory peak.
Sep 13 02:43:47.907980 systemd-logind[1564]: Removed session 22.
Sep 13 02:43:48.030579 systemd[1]: Started sshd@20-10.230.23.130:22-139.178.89.65:39258.service - OpenSSH per-connection server daemon (139.178.89.65:39258).
Sep 13 02:43:49.018073 sshd[5671]: Accepted publickey for core from 139.178.89.65 port 39258 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:49.022746 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:49.037931 systemd-logind[1564]: New session 23 of user core.
Sep 13 02:43:49.044240 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 02:43:49.868949 sshd[5673]: Connection closed by 139.178.89.65 port 39258
Sep 13 02:43:49.870127 sshd-session[5671]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:49.877082 systemd[1]: sshd@20-10.230.23.130:22-139.178.89.65:39258.service: Deactivated successfully.
Sep 13 02:43:49.882179 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 02:43:49.884813 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit.
Sep 13 02:43:49.886592 systemd-logind[1564]: Removed session 23.
Sep 13 02:43:54.945067 containerd[1592]: time="2025-09-13T02:43:54.944291041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bcb652b9cc1ebd3224b185c4cc3b5552112e0242323d3eeb7fd5f025f89aaf7\" id:\"c0ea3ecf31811b92f8cc6b24165dc1e88defde119febca6d7a130ad5db2732df\" pid:5710 exited_at:{seconds:1757731434 nanos:942692832}"
Sep 13 02:43:55.024872 systemd[1]: Started sshd@21-10.230.23.130:22-139.178.89.65:46782.service - OpenSSH per-connection server daemon (139.178.89.65:46782).
Sep 13 02:43:55.974321 sshd[5723]: Accepted publickey for core from 139.178.89.65 port 46782 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:43:55.976727 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:43:55.985233 systemd-logind[1564]: New session 24 of user core.
Sep 13 02:43:55.998123 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 02:43:56.813243 sshd[5725]: Connection closed by 139.178.89.65 port 46782
Sep 13 02:43:56.816638 sshd-session[5723]: pam_unix(sshd:session): session closed for user core
Sep 13 02:43:56.824109 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit.
Sep 13 02:43:56.825378 systemd[1]: sshd@21-10.230.23.130:22-139.178.89.65:46782.service: Deactivated successfully.
Sep 13 02:43:56.829882 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 02:43:56.833759 systemd-logind[1564]: Removed session 24.
Sep 13 02:44:00.938509 containerd[1592]: time="2025-09-13T02:44:00.938341357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6317b02d69e4dffa4847e6ffc2c14d99f924e1c12f29bb0e33150c929f32ff5\" id:\"bcde567dfb83d83b905f2131fa6db2b41f8d7bee3f4e007ae03542cd1f70c5cd\" pid:5755 exited_at:{seconds:1757731440 nanos:937796605}"
Sep 13 02:44:01.972146 systemd[1]: Started sshd@22-10.230.23.130:22-139.178.89.65:57582.service - OpenSSH per-connection server daemon (139.178.89.65:57582).
Sep 13 02:44:02.997813 sshd[5765]: Accepted publickey for core from 139.178.89.65 port 57582 ssh2: RSA SHA256:TXj6QjDDe3r+3pW1NiK6yCZ5ZwfeWHeexMio95l14ss
Sep 13 02:44:03.001791 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 02:44:03.013111 systemd-logind[1564]: New session 25 of user core.
Sep 13 02:44:03.020263 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 02:44:03.841488 sshd[5767]: Connection closed by 139.178.89.65 port 57582
Sep 13 02:44:03.843420 sshd-session[5765]: pam_unix(sshd:session): session closed for user core
Sep 13 02:44:03.856144 systemd[1]: sshd@22-10.230.23.130:22-139.178.89.65:57582.service: Deactivated successfully.
Sep 13 02:44:03.856496 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit.
Sep 13 02:44:03.861508 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 02:44:03.865108 systemd-logind[1564]: Removed session 25.