Mar 2 13:29:35.972570 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 10:28:24 -00 2026
Mar 2 13:29:35.972623 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 13:29:35.972638 kernel: BIOS-provided physical RAM map:
Mar 2 13:29:35.972649 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 13:29:35.972663 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 13:29:35.972673 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 13:29:35.972685 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 2 13:29:35.972696 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 2 13:29:35.972735 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:29:35.972746 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 13:29:35.972756 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:29:35.972767 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 13:29:35.972783 kernel: NX (Execute Disable) protection: active
Mar 2 13:29:35.972800 kernel: APIC: Static calls initialized
Mar 2 13:29:35.972813 kernel: SMBIOS 2.8 present.
Mar 2 13:29:35.972825 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 2 13:29:35.972836 kernel: DMI: Memory slots populated: 1/1
Mar 2 13:29:35.972847 kernel: Hypervisor detected: KVM
Mar 2 13:29:35.972859 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 2 13:29:35.972874 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:29:35.972898 kernel: kvm-clock: using sched offset of 5924810329 cycles
Mar 2 13:29:35.972911 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:29:35.972923 kernel: tsc: Detected 2500.032 MHz processor
Mar 2 13:29:35.972934 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:29:35.972946 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:29:35.972957 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 2 13:29:35.972969 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 13:29:35.972981 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:29:35.972998 kernel: Using GB pages for direct mapping
Mar 2 13:29:35.973009 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:29:35.973021 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 2 13:29:35.973033 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973044 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973056 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973067 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 2 13:29:35.973079 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973090 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973106 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973118 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:29:35.973130 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 2 13:29:35.973147 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 2 13:29:35.973159 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 2 13:29:35.973171 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 2 13:29:35.973187 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 2 13:29:35.973199 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 2 13:29:35.973212 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 2 13:29:35.973224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 2 13:29:35.973236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 2 13:29:35.973248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 2 13:29:35.973260 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Mar 2 13:29:35.973272 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Mar 2 13:29:35.973288 kernel: Zone ranges:
Mar 2 13:29:35.973300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:29:35.973312 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 2 13:29:35.973324 kernel: Normal empty
Mar 2 13:29:35.973336 kernel: Device empty
Mar 2 13:29:35.973348 kernel: Movable zone start for each node
Mar 2 13:29:35.973360 kernel: Early memory node ranges
Mar 2 13:29:35.973372 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 13:29:35.973384 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 2 13:29:35.973400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 2 13:29:35.973412 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:29:35.973424 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 13:29:35.973436 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 2 13:29:35.973448 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:29:35.973468 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:29:35.973484 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:29:35.973497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:29:35.973509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:29:35.973521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:29:35.973539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:29:35.973551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:29:35.973569 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:29:35.973581 kernel: TSC deadline timer available
Mar 2 13:29:35.973593 kernel: CPU topo: Max. logical packages: 16
Mar 2 13:29:35.973605 kernel: CPU topo: Max. logical dies: 16
Mar 2 13:29:35.973617 kernel: CPU topo: Max. dies per package: 1
Mar 2 13:29:35.973628 kernel: CPU topo: Max. threads per core: 1
Mar 2 13:29:35.973640 kernel: CPU topo: Num. cores per package: 1
Mar 2 13:29:35.973657 kernel: CPU topo: Num. threads per package: 1
Mar 2 13:29:35.973669 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Mar 2 13:29:35.973681 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:29:35.973693 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 13:29:35.976738 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:29:35.976754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:29:35.976766 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 2 13:29:35.976779 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Mar 2 13:29:35.976791 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Mar 2 13:29:35.976810 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 2 13:29:35.976822 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:29:35.976834 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:29:35.976848 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 13:29:35.976861 kernel: random: crng init done
Mar 2 13:29:35.976873 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:29:35.976897 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 2 13:29:35.976909 kernel: Fallback order for Node 0: 0
Mar 2 13:29:35.976927 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Mar 2 13:29:35.976939 kernel: Policy zone: DMA32
Mar 2 13:29:35.976951 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:29:35.976964 kernel: software IO TLB: area num 16.
Mar 2 13:29:35.976976 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 2 13:29:35.976988 kernel: Kernel/User page tables isolation: enabled
Mar 2 13:29:35.977000 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 2 13:29:35.977012 kernel: ftrace: allocated 157 pages with 5 groups
Mar 2 13:29:35.977024 kernel: Dynamic Preempt: voluntary
Mar 2 13:29:35.977040 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:29:35.977053 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:29:35.977065 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 2 13:29:35.977078 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:29:35.977090 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:29:35.977102 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:29:35.977114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:29:35.977126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 2 13:29:35.977138 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 2 13:29:35.977155 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 2 13:29:35.977167 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 2 13:29:35.977179 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 2 13:29:35.977191 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:29:35.977214 kernel: Console: colour VGA+ 80x25
Mar 2 13:29:35.977230 kernel: printk: legacy console [tty0] enabled
Mar 2 13:29:35.977243 kernel: printk: legacy console [ttyS0] enabled
Mar 2 13:29:35.977255 kernel: ACPI: Core revision 20240827
Mar 2 13:29:35.977268 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:29:35.977280 kernel: x2apic enabled
Mar 2 13:29:35.977293 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:29:35.977306 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Mar 2 13:29:35.977323 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Mar 2 13:29:35.977336 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:29:35.977349 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 2 13:29:35.977361 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 2 13:29:35.977374 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:29:35.977390 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:29:35.977403 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:29:35.977415 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 2 13:29:35.977427 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 2 13:29:35.977440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 2 13:29:35.977452 kernel: MDS: Mitigation: Clear CPU buffers
Mar 2 13:29:35.977464 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 2 13:29:35.977477 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 2 13:29:35.977489 kernel: active return thunk: its_return_thunk
Mar 2 13:29:35.977506 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 2 13:29:35.977518 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:29:35.977535 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:29:35.977547 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:29:35.977559 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:29:35.977572 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 2 13:29:35.977584 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:29:35.977596 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:29:35.977609 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 2 13:29:35.977621 kernel: landlock: Up and running.
Mar 2 13:29:35.977633 kernel: SELinux: Initializing.
Mar 2 13:29:35.977646 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 2 13:29:35.977658 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 2 13:29:35.977671 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 2 13:29:35.977688 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 2 13:29:35.977714 kernel: signal: max sigframe size: 1776
Mar 2 13:29:35.977728 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:29:35.977741 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:29:35.977754 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Mar 2 13:29:35.977766 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:29:35.977779 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:29:35.977795 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:29:35.977808 kernel: .... node #0, CPUs: #1
Mar 2 13:29:35.977827 kernel: smp: Brought up 1 node, 2 CPUs
Mar 2 13:29:35.977839 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Mar 2 13:29:35.977853 kernel: Memory: 1887484K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 203116K reserved, 0K cma-reserved)
Mar 2 13:29:35.977866 kernel: devtmpfs: initialized
Mar 2 13:29:35.977878 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:29:35.977901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:29:35.977914 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 2 13:29:35.977927 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:29:35.977939 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:29:35.977957 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:29:35.977970 kernel: audit: type=2000 audit(1772458172.307:1): state=initialized audit_enabled=0 res=1
Mar 2 13:29:35.977983 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:29:35.977995 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:29:35.978008 kernel: cpuidle: using governor menu
Mar 2 13:29:35.978020 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:29:35.978033 kernel: dca service started, version 1.12.1
Mar 2 13:29:35.978046 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 2 13:29:35.978058 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:29:35.978075 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:29:35.978088 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:29:35.978101 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:29:35.978114 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:29:35.978126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:29:35.978139 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:29:35.978151 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:29:35.978164 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:29:35.978176 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:29:35.978193 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:29:35.978206 kernel: ACPI: Interpreter enabled
Mar 2 13:29:35.978219 kernel: ACPI: PM: (supports S0 S5)
Mar 2 13:29:35.978231 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:29:35.978244 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:29:35.978257 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:29:35.978269 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:29:35.978282 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:29:35.978596 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:29:35.982851 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 2 13:29:35.983038 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 2 13:29:35.983060 kernel: PCI host bridge to bus 0000:00
Mar 2 13:29:35.983264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:29:35.983412 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:29:35.983555 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:29:35.983744 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 2 13:29:35.983907 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:29:35.984054 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 2 13:29:35.984197 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:29:35.984391 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 2 13:29:35.984594 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Mar 2 13:29:35.987859 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Mar 2 13:29:35.988046 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Mar 2 13:29:35.988208 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Mar 2 13:29:35.988367 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:29:35.988578 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:35.990801 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Mar 2 13:29:35.991002 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 2 13:29:35.991177 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 2 13:29:35.991342 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 2 13:29:35.991544 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:35.993755 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Mar 2 13:29:35.993965 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 2 13:29:35.994132 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 2 13:29:35.994292 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 2 13:29:35.994488 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:35.994648 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Mar 2 13:29:35.994834 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 2 13:29:35.995017 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 2 13:29:35.995175 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 2 13:29:35.995352 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:35.995511 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Mar 2 13:29:35.995685 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 2 13:29:35.998922 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 2 13:29:35.999088 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 2 13:29:35.999284 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:35.999445 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Mar 2 13:29:35.999602 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 2 13:29:35.999781 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 2 13:29:35.999970 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 2 13:29:36.000158 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:36.000425 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Mar 2 13:29:36.000682 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 2 13:29:36.002874 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 2 13:29:36.003049 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 2 13:29:36.003242 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:36.003435 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Mar 2 13:29:36.003609 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 2 13:29:36.004820 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 2 13:29:36.004996 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 2 13:29:36.005200 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 2 13:29:36.005371 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Mar 2 13:29:36.005525 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 2 13:29:36.005671 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 2 13:29:36.005895 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 2 13:29:36.006089 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 2 13:29:36.006249 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 2 13:29:36.006406 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Mar 2 13:29:36.006562 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 2 13:29:36.006737 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Mar 2 13:29:36.006949 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 2 13:29:36.007109 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Mar 2 13:29:36.007264 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Mar 2 13:29:36.007419 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 2 13:29:36.007629 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 2 13:29:36.007824 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:29:36.008022 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 2 13:29:36.008190 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Mar 2 13:29:36.008347 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Mar 2 13:29:36.008534 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 2 13:29:36.008693 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 2 13:29:36.008927 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Mar 2 13:29:36.009093 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Mar 2 13:29:36.009264 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 2 13:29:36.009425 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 2 13:29:36.009584 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 2 13:29:36.009840 kernel: pci_bus 0000:02: extended config space not accessible
Mar 2 13:29:36.010046 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Mar 2 13:29:36.010217 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Mar 2 13:29:36.010380 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 2 13:29:36.010580 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Mar 2 13:29:36.010765 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Mar 2 13:29:36.010940 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 2 13:29:36.011137 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Mar 2 13:29:36.011303 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 2 13:29:36.011460 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 2 13:29:36.011635 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 2 13:29:36.011832 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 2 13:29:36.012006 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 2 13:29:36.012165 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 2 13:29:36.012321 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 2 13:29:36.012342 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:29:36.012355 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:29:36.012368 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:29:36.012388 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:29:36.012401 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:29:36.012414 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:29:36.012427 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:29:36.012440 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:29:36.012453 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:29:36.012465 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:29:36.012478 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:29:36.012491 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:29:36.012508 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:29:36.012521 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:29:36.012534 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:29:36.012547 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:29:36.012559 kernel: iommu: Default domain type: Translated
Mar 2 13:29:36.012572 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:29:36.012585 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:29:36.012598 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:29:36.012610 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 13:29:36.012627 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 2 13:29:36.012802 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:29:36.012974 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:29:36.013129 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:29:36.013149 kernel: vgaarb: loaded
Mar 2 13:29:36.013162 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:29:36.013175 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:29:36.013188 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:29:36.013208 kernel: pnp: PnP ACPI init
Mar 2 13:29:36.013409 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:29:36.013430 kernel: pnp: PnP ACPI: found 5 devices
Mar 2 13:29:36.013444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:29:36.013468 kernel: NET: Registered PF_INET protocol family
Mar 2 13:29:36.013482 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:29:36.013495 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 2 13:29:36.013508 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:29:36.013527 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 2 13:29:36.013541 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 2 13:29:36.013562 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 2 13:29:36.013574 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 2 13:29:36.013587 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 2 13:29:36.013600 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:29:36.013613 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:29:36.013798 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 2 13:29:36.013972 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 2 13:29:36.014137 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 2 13:29:36.014293 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 2 13:29:36.014462 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 2 13:29:36.014619 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 2 13:29:36.014796 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 2 13:29:36.014968 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 2 13:29:36.015126 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Mar 2 13:29:36.015282 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Mar 2 13:29:36.015448 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Mar 2 13:29:36.015610 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Mar 2 13:29:36.015811 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Mar 2 13:29:36.015995 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Mar 2 13:29:36.016164 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Mar 2 13:29:36.016320 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Mar 2 13:29:36.016481 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 2 13:29:36.016672 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 2 13:29:36.016849 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 2 13:29:36.017018 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 2 13:29:36.017188 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 2 13:29:36.017362 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 2 13:29:36.017521 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 2 13:29:36.017689 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 2 13:29:36.017900 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 2 13:29:36.018068 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 2 13:29:36.018227 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 2 13:29:36.018396 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 2 13:29:36.018554 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 2 13:29:36.018742 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 2 13:29:36.018915 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 2 13:29:36.019073 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 2 13:29:36.019229 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 2 13:29:36.019395 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 2 13:29:36.019551 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 2 13:29:36.019735 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 2 13:29:36.019911 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 2 13:29:36.020071 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 2 13:29:36.020237 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 2 13:29:36.020394 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 2 13:29:36.020551 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 2 13:29:36.020724 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 2 13:29:36.020893 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 2 13:29:36.021055 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 2 13:29:36.021212 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 2 13:29:36.021369 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 2 13:29:36.021525 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 2 13:29:36.021689 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 2 13:29:36.021875 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 2 13:29:36.022047 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 2 13:29:36.022197 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:29:36.022342 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:29:36.022486 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:29:36.022634 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 2 13:29:36.022807 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:29:36.022977 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 2 13:29:36.023160 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 2 13:29:36.023320 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 2 13:29:36.023484 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 2 13:29:36.023642 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 2 13:29:36.023860 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 2 13:29:36.024027 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 2 13:29:36.024185 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 2 13:29:36.024364 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 2 13:29:36.024515 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 2 13:29:36.024664 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 2 13:29:36.024861 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 2 13:29:36.025037 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 2 13:29:36.025186 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 2 13:29:36.025362 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 2 13:29:36.025512 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 2 13:29:36.025660 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 2 13:29:36.025858 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 2 13:29:36.026031 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 2 13:29:36.026180 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 2 13:29:36.026363 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 2 13:29:36.026523 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 2
13:29:36.026671 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 2 13:29:36.026864 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Mar 2 13:29:36.027033 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Mar 2 13:29:36.027189 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 2 13:29:36.027211 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 2 13:29:36.027226 kernel: PCI: CLS 0 bytes, default 64 Mar 2 13:29:36.027246 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 2 13:29:36.027260 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Mar 2 13:29:36.027273 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 2 13:29:36.027287 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Mar 2 13:29:36.027301 kernel: Initialise system trusted keyrings Mar 2 13:29:36.027315 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 2 13:29:36.027328 kernel: Key type asymmetric registered Mar 2 13:29:36.027341 kernel: Asymmetric key parser 'x509' registered Mar 2 13:29:36.027358 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 2 13:29:36.027372 kernel: io scheduler mq-deadline registered Mar 2 13:29:36.027389 kernel: io scheduler kyber registered Mar 2 13:29:36.027403 kernel: io scheduler bfq registered Mar 2 13:29:36.027558 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 2 13:29:36.027754 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 2 13:29:36.027930 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.028088 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 2 13:29:36.028253 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 2 13:29:36.028410 kernel: pcieport 
0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.028567 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 2 13:29:36.028740 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 2 13:29:36.028911 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.029070 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 2 13:29:36.029234 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 2 13:29:36.029410 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.029568 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 2 13:29:36.029753 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 2 13:29:36.029932 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.030091 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 2 13:29:36.030256 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 2 13:29:36.030414 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.030571 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 2 13:29:36.030746 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 2 13:29:36.030917 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.031076 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 2 13:29:36.031241 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 2 13:29:36.031398 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 
AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 2 13:29:36.031419 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 2 13:29:36.031433 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 2 13:29:36.031447 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 2 13:29:36.031461 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 13:29:36.031474 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 2 13:29:36.031495 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 2 13:29:36.031508 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 2 13:29:36.031530 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 2 13:29:36.031544 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 2 13:29:36.031739 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 2 13:29:36.031920 kernel: rtc_cmos 00:03: registered as rtc0 Mar 2 13:29:36.032070 kernel: rtc_cmos 00:03: setting system clock to 2026-03-02T13:29:35 UTC (1772458175) Mar 2 13:29:36.032218 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 2 13:29:36.032245 kernel: intel_pstate: CPU model not supported Mar 2 13:29:36.032265 kernel: NET: Registered PF_INET6 protocol family Mar 2 13:29:36.032278 kernel: Segment Routing with IPv6 Mar 2 13:29:36.032292 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 13:29:36.032305 kernel: NET: Registered PF_PACKET protocol family Mar 2 13:29:36.032318 kernel: Key type dns_resolver registered Mar 2 13:29:36.032331 kernel: IPI shorthand broadcast: enabled Mar 2 13:29:36.032345 kernel: sched_clock: Marking stable (3481004079, 229665672)->(3837027521, -126357770) Mar 2 13:29:36.032358 kernel: registered taskstats version 1 Mar 2 13:29:36.032376 kernel: Loading compiled-in X.509 certificates Mar 2 13:29:36.032390 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 
ca052fea375a75b056ebd4154b64794dffb70b96' Mar 2 13:29:36.032403 kernel: Demotion targets for Node 0: null Mar 2 13:29:36.032417 kernel: Key type .fscrypt registered Mar 2 13:29:36.032430 kernel: Key type fscrypt-provisioning registered Mar 2 13:29:36.032443 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 2 13:29:36.032456 kernel: ima: Allocated hash algorithm: sha1 Mar 2 13:29:36.032469 kernel: ima: No architecture policies found Mar 2 13:29:36.032482 kernel: clk: Disabling unused clocks Mar 2 13:29:36.032496 kernel: Warning: unable to open an initial console. Mar 2 13:29:36.032526 kernel: Freeing unused kernel image (initmem) memory: 46192K Mar 2 13:29:36.032539 kernel: Write protecting the kernel read-only data: 40960k Mar 2 13:29:36.032552 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 2 13:29:36.032577 kernel: Run /init as init process Mar 2 13:29:36.032590 kernel: with arguments: Mar 2 13:29:36.032610 kernel: /init Mar 2 13:29:36.032623 kernel: with environment: Mar 2 13:29:36.032648 kernel: HOME=/ Mar 2 13:29:36.032660 kernel: TERM=linux Mar 2 13:29:36.032690 systemd[1]: Successfully made /usr/ read-only. Mar 2 13:29:36.032737 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 13:29:36.032753 systemd[1]: Detected virtualization kvm. Mar 2 13:29:36.032766 systemd[1]: Detected architecture x86-64. Mar 2 13:29:36.032780 systemd[1]: Running in initrd. Mar 2 13:29:36.032802 systemd[1]: No hostname configured, using default hostname. Mar 2 13:29:36.032816 systemd[1]: Hostname set to . Mar 2 13:29:36.032837 systemd[1]: Initializing machine ID from VM UUID. 
Mar 2 13:29:36.032851 systemd[1]: Queued start job for default target initrd.target. Mar 2 13:29:36.032865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:29:36.032889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:29:36.032907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 13:29:36.032921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:29:36.032935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 13:29:36.032956 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 13:29:36.032972 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 13:29:36.032987 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 13:29:36.033001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:29:36.033015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:29:36.033029 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:29:36.033043 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:29:36.033057 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:29:36.033076 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:29:36.033090 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:29:36.033105 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:29:36.033119 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 13:29:36.033133 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Mar 2 13:29:36.033147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:29:36.033162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:29:36.033176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:29:36.033195 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:29:36.033209 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 13:29:36.033223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:29:36.033237 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 13:29:36.033252 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 2 13:29:36.033266 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 13:29:36.033280 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:29:36.033294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:29:36.033308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:29:36.033327 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 13:29:36.033342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:29:36.033357 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 13:29:36.033371 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:29:36.033453 systemd-journald[210]: Collecting audit messages is disabled. Mar 2 13:29:36.033500 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:29:36.033515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Mar 2 13:29:36.033529 kernel: Bridge firewalling registered Mar 2 13:29:36.033550 systemd-journald[210]: Journal started Mar 2 13:29:36.033582 systemd-journald[210]: Runtime Journal (/run/log/journal/3163e32e5d5a414c84e9e3817344fa3d) is 4.7M, max 37.8M, 33.1M free. Mar 2 13:29:35.970768 systemd-modules-load[212]: Inserted module 'overlay' Mar 2 13:29:36.064386 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:29:36.003529 systemd-modules-load[212]: Inserted module 'br_netfilter' Mar 2 13:29:36.066778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:29:36.067870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:29:36.073174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:29:36.075841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:29:36.080838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:29:36.087871 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:29:36.102252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:29:36.105857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:29:36.115414 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 2 13:29:36.119904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:29:36.122851 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:29:36.124717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:29:36.136908 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 2 13:29:36.151585 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2 Mar 2 13:29:36.192861 systemd-resolved[251]: Positive Trust Anchors: Mar 2 13:29:36.194006 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:29:36.194055 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:29:36.202315 systemd-resolved[251]: Defaulting to hostname 'linux'. Mar 2 13:29:36.205498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:29:36.206562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:29:36.265776 kernel: SCSI subsystem initialized Mar 2 13:29:36.277741 kernel: Loading iSCSI transport class v2.0-870. 
Mar 2 13:29:36.291763 kernel: iscsi: registered transport (tcp) Mar 2 13:29:36.317776 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:29:36.317891 kernel: QLogic iSCSI HBA Driver Mar 2 13:29:36.344981 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 13:29:36.363925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:29:36.367966 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 13:29:36.431452 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:29:36.435406 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 13:29:36.497758 kernel: raid6: sse2x4 gen() 13006 MB/s Mar 2 13:29:36.515807 kernel: raid6: sse2x2 gen() 8857 MB/s Mar 2 13:29:36.534497 kernel: raid6: sse2x1 gen() 9805 MB/s Mar 2 13:29:36.534586 kernel: raid6: using algorithm sse2x4 gen() 13006 MB/s Mar 2 13:29:36.553569 kernel: raid6: .... xor() 7508 MB/s, rmw enabled Mar 2 13:29:36.553677 kernel: raid6: using ssse3x2 recovery algorithm Mar 2 13:29:36.579741 kernel: xor: automatically using best checksumming function avx Mar 2 13:29:36.773756 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:29:36.782753 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:29:36.785903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:29:36.819414 systemd-udevd[460]: Using default interface naming scheme 'v255'. Mar 2 13:29:36.829253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:29:36.832993 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:29:36.860870 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Mar 2 13:29:36.895497 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 2 13:29:36.898263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:29:37.028586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:29:37.031894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:29:37.172737 kernel: ACPI: bus type USB registered Mar 2 13:29:37.177769 kernel: usbcore: registered new interface driver usbfs Mar 2 13:29:37.177855 kernel: usbcore: registered new interface driver hub Mar 2 13:29:37.183739 kernel: usbcore: registered new device driver usb Mar 2 13:29:37.198796 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 2 13:29:37.199126 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 13:29:37.212029 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 2 13:29:37.226728 kernel: AES CTR mode by8 optimization enabled Mar 2 13:29:37.233835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 13:29:37.233885 kernel: GPT:17805311 != 125829119 Mar 2 13:29:37.235010 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 13:29:37.237214 kernel: GPT:17805311 != 125829119 Mar 2 13:29:37.237249 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 13:29:37.239080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:29:37.253543 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 2 13:29:37.255061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:29:37.255243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:29:37.257206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:29:37.266641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:29:37.268922 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 2 13:29:37.307089 kernel: libata version 3.00 loaded. Mar 2 13:29:37.323759 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 2 13:29:37.329959 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 2 13:29:37.330238 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 13:29:37.330440 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 13:29:37.330462 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 2 13:29:37.343790 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 2 13:29:37.344247 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 2 13:29:37.344447 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 13:29:37.358771 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 2 13:29:37.359080 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 2 13:29:37.359339 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 2 13:29:37.360735 kernel: hub 1-0:1.0: USB hub found Mar 2 13:29:37.361002 kernel: hub 1-0:1.0: 4 ports detected Mar 2 13:29:37.366733 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Mar 2 13:29:37.366995 kernel: hub 2-0:1.0: USB hub found Mar 2 13:29:37.367207 kernel: hub 2-0:1.0: 4 ports detected Mar 2 13:29:37.377565 kernel: scsi host0: ahci Mar 2 13:29:37.384754 kernel: scsi host1: ahci Mar 2 13:29:37.399148 kernel: scsi host2: ahci Mar 2 13:29:37.399397 kernel: scsi host3: ahci Mar 2 13:29:37.400003 kernel: scsi host4: ahci Mar 2 13:29:37.401203 kernel: scsi host5: ahci Mar 2 13:29:37.401412 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1 Mar 2 13:29:37.401434 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1 Mar 2 13:29:37.401451 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1 Mar 2 13:29:37.401468 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1 Mar 2 13:29:37.401485 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1 Mar 2 13:29:37.401501 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1 Mar 2 13:29:37.403543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 13:29:37.453436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:29:37.476617 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 13:29:37.497371 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 13:29:37.498284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 13:29:37.512333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:29:37.514643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:29:37.551612 disk-uuid[613]: Primary Header is updated. 
Mar 2 13:29:37.551612 disk-uuid[613]: Secondary Entries is updated. Mar 2 13:29:37.551612 disk-uuid[613]: Secondary Header is updated. Mar 2 13:29:37.557745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:29:37.565728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:29:37.607753 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 2 13:29:37.716329 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 13:29:37.716411 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 13:29:37.716432 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 13:29:37.716450 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 13:29:37.719963 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 13:29:37.720000 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 2 13:29:37.750411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 13:29:37.752768 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 2 13:29:37.760014 kernel: usbcore: registered new interface driver usbhid Mar 2 13:29:37.760055 kernel: usbhid: USB HID core driver Mar 2 13:29:37.772075 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Mar 2 13:29:37.772129 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 2 13:29:37.779119 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:29:37.779991 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:29:37.781680 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:29:37.784466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 13:29:37.809294 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 2 13:29:38.569726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:29:38.570481 disk-uuid[614]: The operation has completed successfully. Mar 2 13:29:38.642423 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:29:38.642602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:29:38.687190 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:29:38.715362 sh[640]: Success Mar 2 13:29:38.738931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 13:29:38.739019 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:29:38.742141 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 2 13:29:38.757726 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Mar 2 13:29:38.820417 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:29:38.829763 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:29:38.834857 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 13:29:38.863744 kernel: BTRFS: device fsid 760529e6-8e55-47fc-ad5a-c1c1d184e50a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (652) Mar 2 13:29:38.867783 kernel: BTRFS info (device dm-0): first mount of filesystem 760529e6-8e55-47fc-ad5a-c1c1d184e50a Mar 2 13:29:38.870733 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:29:38.881197 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 2 13:29:38.881320 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 2 13:29:38.884223 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:29:38.886461 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 2 13:29:38.888338 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:29:38.891988 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:29:38.895887 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:29:38.940111 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (683) Mar 2 13:29:38.940213 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:29:38.942321 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:29:38.949831 kernel: BTRFS info (device vda6): turning on async discard Mar 2 13:29:38.949945 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 13:29:38.958739 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:29:38.960861 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:29:38.963918 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:29:39.070825 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:29:39.076931 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:29:39.145986 systemd-networkd[822]: lo: Link UP Mar 2 13:29:39.146003 systemd-networkd[822]: lo: Gained carrier Mar 2 13:29:39.153361 systemd-networkd[822]: Enumeration completed Mar 2 13:29:39.154007 systemd-networkd[822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:29:39.154015 systemd-networkd[822]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 2 13:29:39.155670 systemd-networkd[822]: eth0: Link UP
Mar 2 13:29:39.156529 systemd-networkd[822]: eth0: Gained carrier
Mar 2 13:29:39.156544 systemd-networkd[822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:29:39.157324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:29:39.162561 systemd[1]: Reached target network.target - Network.
Mar 2 13:29:39.175828 systemd-networkd[822]: eth0: DHCPv4 address 10.230.30.118/30, gateway 10.230.30.117 acquired from 10.230.30.117
Mar 2 13:29:39.199059 ignition[740]: Ignition 2.22.0
Mar 2 13:29:39.199085 ignition[740]: Stage: fetch-offline
Mar 2 13:29:39.199188 ignition[740]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:39.199207 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:39.199375 ignition[740]: parsed url from cmdline: ""
Mar 2 13:29:39.203571 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:29:39.199382 ignition[740]: no config URL provided
Mar 2 13:29:39.199399 ignition[740]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:29:39.199416 ignition[740]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:29:39.199430 ignition[740]: failed to fetch config: resource requires networking
Mar 2 13:29:39.207909 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 2 13:29:39.199951 ignition[740]: Ignition finished successfully
Mar 2 13:29:39.253963 ignition[832]: Ignition 2.22.0
Mar 2 13:29:39.253988 ignition[832]: Stage: fetch
Mar 2 13:29:39.254185 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:39.254204 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:39.254347 ignition[832]: parsed url from cmdline: ""
Mar 2 13:29:39.254355 ignition[832]: no config URL provided
Mar 2 13:29:39.254365 ignition[832]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:29:39.254382 ignition[832]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:29:39.254579 ignition[832]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 2 13:29:39.255949 ignition[832]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 2 13:29:39.256028 ignition[832]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 2 13:29:39.270574 ignition[832]: GET result: OK
Mar 2 13:29:39.271687 ignition[832]: parsing config with SHA512: 61eab46a5e27e0fa2699c203edf2127f9d7229dd7f71f7cd0ded3cd170cc5123230912134436b3995d878b7746a19356589a0de083737bc2363c021376ea139c
Mar 2 13:29:39.278598 unknown[832]: fetched base config from "system"
Mar 2 13:29:39.279260 ignition[832]: fetch: fetch complete
Mar 2 13:29:39.278627 unknown[832]: fetched base config from "system"
Mar 2 13:29:39.279274 ignition[832]: fetch: fetch passed
Mar 2 13:29:39.278637 unknown[832]: fetched user config from "openstack"
Mar 2 13:29:39.279347 ignition[832]: Ignition finished successfully
Mar 2 13:29:39.285552 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 2 13:29:39.287958 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:29:39.331469 ignition[838]: Ignition 2.22.0
Mar 2 13:29:39.331496 ignition[838]: Stage: kargs
Mar 2 13:29:39.331743 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:39.331763 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:39.334991 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:29:39.332905 ignition[838]: kargs: kargs passed
Mar 2 13:29:39.332983 ignition[838]: Ignition finished successfully
Mar 2 13:29:39.338986 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:29:39.379556 ignition[844]: Ignition 2.22.0
Mar 2 13:29:39.380906 ignition[844]: Stage: disks
Mar 2 13:29:39.381758 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:39.381792 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:39.384628 ignition[844]: disks: disks passed
Mar 2 13:29:39.385364 ignition[844]: Ignition finished successfully
Mar 2 13:29:39.387106 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:29:39.388725 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:29:39.390424 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:29:39.391255 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:29:39.392863 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:29:39.394281 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:29:39.397896 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:29:39.432613 systemd-fsck[852]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Mar 2 13:29:39.437187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:29:39.441689 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:29:39.584734 kernel: EXT4-fs (vda9): mounted filesystem 9d55f1a4-66ad-43d6-b325-f6b8d2d08c3e r/w with ordered data mode. Quota mode: none.
Mar 2 13:29:39.586455 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:29:39.588068 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:29:39.591650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:29:39.593835 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:29:39.595601 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 13:29:39.598593 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 2 13:29:39.600180 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:29:39.600224 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:29:39.612552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:29:39.617115 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:29:39.634731 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860)
Mar 2 13:29:39.642210 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 13:29:39.642251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:29:39.661464 kernel: BTRFS info (device vda6): turning on async discard
Mar 2 13:29:39.661573 kernel: BTRFS info (device vda6): enabling free space tree
Mar 2 13:29:39.679930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:29:39.704756 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:39.712191 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:29:39.721993 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:29:39.733658 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:29:39.740492 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:29:39.859427 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:29:39.864774 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:29:39.867907 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:29:39.885100 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:29:39.888624 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 13:29:39.909988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:29:39.937630 ignition[977]: INFO : Ignition 2.22.0
Mar 2 13:29:39.937630 ignition[977]: INFO : Stage: mount
Mar 2 13:29:39.939470 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:39.939470 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:39.939470 ignition[977]: INFO : mount: mount passed
Mar 2 13:29:39.939470 ignition[977]: INFO : Ignition finished successfully
Mar 2 13:29:39.942245 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:29:40.740794 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:40.830070 systemd-networkd[822]: eth0: Gained IPv6LL
Mar 2 13:29:42.338808 systemd-networkd[822]: eth0: Ignoring DHCPv6 address 2a02:1348:179:879d:24:19ff:fee6:1e76/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:879d:24:19ff:fee6:1e76/64 assigned by NDisc.
Mar 2 13:29:42.338821 systemd-networkd[822]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 2 13:29:42.752744 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:46.759746 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:46.766869 coreos-metadata[862]: Mar 02 13:29:46.766 WARN failed to locate config-drive, using the metadata service API instead
Mar 2 13:29:46.792989 coreos-metadata[862]: Mar 02 13:29:46.792 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 2 13:29:46.810982 coreos-metadata[862]: Mar 02 13:29:46.810 INFO Fetch successful
Mar 2 13:29:46.811882 coreos-metadata[862]: Mar 02 13:29:46.811 INFO wrote hostname srv-u4d8l.gb1.brightbox.com to /sysroot/etc/hostname
Mar 2 13:29:46.814149 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 2 13:29:46.814364 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 2 13:29:46.818821 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:29:46.838650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:29:46.858749 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994)
Mar 2 13:29:46.864444 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 13:29:46.864494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:29:46.870395 kernel: BTRFS info (device vda6): turning on async discard
Mar 2 13:29:46.870433 kernel: BTRFS info (device vda6): enabling free space tree
Mar 2 13:29:46.873922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:29:46.918001 ignition[1012]: INFO : Ignition 2.22.0
Mar 2 13:29:46.918001 ignition[1012]: INFO : Stage: files
Mar 2 13:29:46.919998 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:46.919998 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:46.919998 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:29:46.922623 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:29:46.922623 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:29:46.932442 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:29:46.932442 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:29:46.932442 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:29:46.932442 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:29:46.932442 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 13:29:46.924189 unknown[1012]: wrote ssh authorized keys file for user: core
Mar 2 13:29:47.145084 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:29:47.445890 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:29:47.447417 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 13:29:47.447417 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 2 13:29:47.867248 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 2 13:29:48.428150 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:29:48.430353 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:29:48.439841 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:29:48.439841 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:29:48.439841 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:29:48.439841 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:29:48.439841 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:29:48.439841 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 2 13:29:48.718513 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 2 13:29:49.959560 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:29:49.959560 ignition[1012]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 2 13:29:49.963696 ignition[1012]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:29:49.965597 ignition[1012]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:29:49.965597 ignition[1012]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 2 13:29:49.965597 ignition[1012]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:29:49.969012 ignition[1012]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:29:49.969012 ignition[1012]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:29:49.969012 ignition[1012]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:29:49.969012 ignition[1012]: INFO : files: files passed
Mar 2 13:29:49.969012 ignition[1012]: INFO : Ignition finished successfully
Mar 2 13:29:49.968990 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:29:49.974916 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:29:49.979973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:29:50.107596 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:29:50.107596 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:29:50.106579 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:29:50.106809 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:29:50.114843 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:29:50.116427 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:29:50.119108 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:29:50.121939 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:29:50.188330 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:29:50.188548 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:29:50.190388 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:29:50.191820 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:29:50.193451 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:29:50.195871 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:29:50.239616 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:29:50.243878 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:29:50.265873 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:29:50.267809 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:29:50.269744 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:29:50.270569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:29:50.270793 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:29:50.272655 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:29:50.273575 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:29:50.275193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:29:50.276681 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:29:50.278225 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:29:50.279841 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 2 13:29:50.281484 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:29:50.283074 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:29:50.284899 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:29:50.286380 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:29:50.287908 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:29:50.289468 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:29:50.289820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:29:50.291368 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:29:50.292386 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:29:50.293992 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:29:50.294206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:29:50.301480 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:29:50.301774 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:29:50.303639 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:29:50.303919 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:29:50.305503 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:29:50.305665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:29:50.308972 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:29:50.312843 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:29:50.313551 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:29:50.314913 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:29:50.316826 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:29:50.317064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:29:50.327400 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:29:50.328421 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:29:50.350086 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:29:50.356276 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:29:50.356488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:29:50.359403 ignition[1066]: INFO : Ignition 2.22.0
Mar 2 13:29:50.359403 ignition[1066]: INFO : Stage: umount
Mar 2 13:29:50.359403 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:29:50.359403 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 2 13:29:50.366312 ignition[1066]: INFO : umount: umount passed
Mar 2 13:29:50.366312 ignition[1066]: INFO : Ignition finished successfully
Mar 2 13:29:50.362007 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:29:50.363192 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:29:50.364466 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:29:50.364561 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:29:50.365427 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:29:50.365497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:29:50.367026 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 2 13:29:50.367094 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 2 13:29:50.368395 systemd[1]: Stopped target network.target - Network.
Mar 2 13:29:50.369805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:29:50.369886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:29:50.371354 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:29:50.372773 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:29:50.372878 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:29:50.374220 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:29:50.375567 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:29:50.377033 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:29:50.377120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:29:50.378404 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:29:50.378501 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:29:50.379800 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:29:50.379889 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:29:50.381100 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:29:50.381166 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:29:50.382449 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:29:50.382527 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:29:50.384204 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:29:50.386507 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:29:50.387813 systemd-networkd[822]: eth0: DHCPv6 lease lost
Mar 2 13:29:50.391360 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:29:50.391590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:29:50.396811 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 2 13:29:50.397146 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:29:50.397343 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:29:50.399908 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 2 13:29:50.401353 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 2 13:29:50.402840 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:29:50.402924 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:29:50.405469 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:29:50.407812 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:29:50.407886 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:29:50.410126 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:29:50.410202 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:29:50.414930 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:29:50.415002 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:29:50.416465 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:29:50.416531 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:29:50.419779 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:29:50.424282 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 13:29:50.424377 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 2 13:29:50.429210 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:29:50.429506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:29:50.432182 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:29:50.432373 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:29:50.433988 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:29:50.434043 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:29:50.436863 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:29:50.436938 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:29:50.439180 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:29:50.439247 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:29:50.440573 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:29:50.440656 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:29:50.443110 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:29:50.445579 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 2 13:29:50.445667 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:29:50.447750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:29:50.447848 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:29:50.449692 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 13:29:50.450811 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:29:50.452204 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:29:50.452274 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:29:50.455865 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:29:50.455945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:50.461548 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 2 13:29:50.461633 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 2 13:29:50.461722 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 2 13:29:50.461801 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 2 13:29:50.462432 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:29:50.462573 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:29:50.471489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:29:50.471624 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:29:50.473155 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:29:50.475530 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:29:50.497241 systemd[1]: Switching root.
Mar 2 13:29:50.536242 systemd-journald[210]: Journal stopped
Mar 2 13:29:52.114846 systemd-journald[210]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:29:52.115027 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:29:52.115069 kernel: SELinux: policy capability open_perms=1
Mar 2 13:29:52.115097 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:29:52.115134 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:29:52.115162 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:29:52.115194 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:29:52.115220 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:29:52.115245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:29:52.115275 kernel: SELinux: policy capability userspace_initial_context=0
Mar 2 13:29:52.115301 kernel: audit: type=1403 audit(1772458190.819:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:29:52.115330 systemd[1]: Successfully loaded SELinux policy in 78.124ms.
Mar 2 13:29:52.115385 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.248ms.
Mar 2 13:29:52.115418 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 2 13:29:52.115448 systemd[1]: Detected virtualization kvm.
Mar 2 13:29:52.115476 systemd[1]: Detected architecture x86-64.
Mar 2 13:29:52.115503 systemd[1]: Detected first boot.
Mar 2 13:29:52.115536 systemd[1]: Hostname set to .
Mar 2 13:29:52.115565 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:29:52.115586 zram_generator::config[1110]: No configuration found.
Mar 2 13:29:52.115620 kernel: Guest personality initialized and is inactive
Mar 2 13:29:52.115641 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 2 13:29:52.115660 kernel: Initialized host personality
Mar 2 13:29:52.115678 kernel: NET: Registered PF_VSOCK protocol family
Mar 2 13:29:52.117724 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:29:52.117772 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 2 13:29:52.117816 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:29:52.117848 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:29:52.117870 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:29:52.117899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:29:52.117942 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:29:52.117981 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:29:52.118004 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:29:52.118025 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:29:52.118045 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:29:52.118066 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:29:52.118086 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:29:52.118106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:29:52.118127 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:29:52.118147 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:29:52.118184 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:29:52.118207 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:29:52.118242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:29:52.118264 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 13:29:52.118292 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:29:52.118319 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:29:52.118360 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:29:52.118384 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:29:52.118406 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:29:52.118427 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:29:52.118448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:29:52.118475 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:29:52.118497 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:29:52.118518 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:29:52.118546 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:29:52.118579 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:29:52.118618 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 2 13:29:52.118646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:29:52.118668 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:29:52.118688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:29:52.118725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:29:52.118748 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:29:52.118769 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:29:52.118797 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:29:52.118830 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:52.118859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:29:52.118881 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:29:52.118901 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:29:52.118929 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:29:52.118951 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:29:52.118980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:29:52.119008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:29:52.119040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:29:52.119069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:29:52.119091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:29:52.119112 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:29:52.119133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:29:52.119153 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:29:52.119174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:29:52.119203 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:29:52.119224 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:29:52.119260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:29:52.119282 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:29:52.119302 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:29:52.119330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 13:29:52.119360 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:29:52.119402 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:29:52.119431 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:29:52.119461 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:29:52.119482 kernel: loop: module loaded
Mar 2 13:29:52.119517 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 2 13:29:52.119540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:29:52.119570 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:29:52.119592 systemd[1]: Stopped verity-setup.service.
Mar 2 13:29:52.119615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:52.119636 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:29:52.119657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:29:52.119678 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:29:52.121735 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:29:52.121782 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:29:52.121806 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:29:52.121827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:29:52.121848 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:29:52.121877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:29:52.121916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:29:52.121938 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:29:52.121959 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:29:52.121994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:29:52.122017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:29:52.122038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:29:52.122058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:29:52.122089 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:29:52.122116 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:29:52.122144 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:29:52.122165 kernel: fuse: init (API version 7.41)
Mar 2 13:29:52.122233 systemd-journald[1193]: Collecting audit messages is disabled.
Mar 2 13:29:52.122299 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:29:52.122324 systemd-journald[1193]: Journal started
Mar 2 13:29:52.122385 systemd-journald[1193]: Runtime Journal (/run/log/journal/3163e32e5d5a414c84e9e3817344fa3d) is 4.7M, max 37.8M, 33.1M free.
Mar 2 13:29:51.672534 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:29:51.695144 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:29:51.695973 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:29:52.125861 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:29:52.130803 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:29:52.136763 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 2 13:29:52.145727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:29:52.145818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:29:52.153735 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:29:52.159722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:29:52.164736 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:29:52.170756 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:29:52.174727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:29:52.185766 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:29:52.198998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:29:52.199093 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:29:52.203236 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:29:52.204790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:29:52.207359 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 2 13:29:52.210431 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:29:52.248026 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:29:52.307584 kernel: loop0: detected capacity change from 0 to 110984
Mar 2 13:29:52.312203 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:29:52.313856 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:29:52.328862 kernel: ACPI: bus type drm_connector registered
Mar 2 13:29:52.327471 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 2 13:29:52.334978 systemd-journald[1193]: Time spent on flushing to /var/log/journal/3163e32e5d5a414c84e9e3817344fa3d is 116.200ms for 1169 entries.
Mar 2 13:29:52.334978 systemd-journald[1193]: System Journal (/var/log/journal/3163e32e5d5a414c84e9e3817344fa3d) is 8M, max 584.8M, 576.8M free.
Mar 2 13:29:52.470047 systemd-journald[1193]: Received client request to flush runtime journal.
Mar 2 13:29:52.470310 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:29:52.470355 kernel: loop1: detected capacity change from 0 to 8
Mar 2 13:29:52.470391 kernel: loop2: detected capacity change from 0 to 128560
Mar 2 13:29:52.345149 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:29:52.345483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:29:52.350142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:29:52.377675 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:29:52.442126 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 2 13:29:52.444263 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Mar 2 13:29:52.444285 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Mar 2 13:29:52.467197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:29:52.475946 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:29:52.478363 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:29:52.498594 kernel: loop3: detected capacity change from 0 to 228704
Mar 2 13:29:52.554257 kernel: loop4: detected capacity change from 0 to 110984
Mar 2 13:29:52.578412 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:29:52.588323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:29:52.596612 kernel: loop5: detected capacity change from 0 to 8
Mar 2 13:29:52.605774 kernel: loop6: detected capacity change from 0 to 128560
Mar 2 13:29:52.653738 kernel: loop7: detected capacity change from 0 to 228704
Mar 2 13:29:52.663131 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Mar 2 13:29:52.663161 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Mar 2 13:29:52.672368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:29:52.677322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:29:52.690008 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 2 13:29:52.691000 (sd-merge)[1271]: Merged extensions into '/usr'.
Mar 2 13:29:52.698504 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:29:52.705856 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:29:52.707198 systemd[1]: Reloading...
Mar 2 13:29:52.961995 zram_generator::config[1302]: No configuration found.
Mar 2 13:29:53.095344 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:29:53.255783 systemd[1]: Reloading finished in 546 ms.
Mar 2 13:29:53.283669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:29:53.288634 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:29:53.293632 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:29:53.305907 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:29:53.311879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:29:53.332578 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:29:53.345801 systemd[1]: Reload requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:29:53.345826 systemd[1]: Reloading...
Mar 2 13:29:53.361669 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 2 13:29:53.363140 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 2 13:29:53.365172 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:29:53.365622 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:29:53.369998 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:29:53.370632 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Mar 2 13:29:53.370927 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Mar 2 13:29:53.378891 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:29:53.378909 systemd-tmpfiles[1360]: Skipping /boot
Mar 2 13:29:53.402676 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:29:53.402696 systemd-tmpfiles[1360]: Skipping /boot
Mar 2 13:29:53.451839 zram_generator::config[1384]: No configuration found.
Mar 2 13:29:53.729665 systemd[1]: Reloading finished in 383 ms.
Mar 2 13:29:53.756544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:29:53.769396 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:29:53.780583 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 13:29:53.784000 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:29:53.788819 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:29:53.799046 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:29:53.805092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:29:53.814188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:29:53.822735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:53.823055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:29:53.826793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:29:53.842893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:29:53.851804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:29:53.852685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:29:53.852918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 13:29:53.853077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:53.865513 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:29:53.873363 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:53.873646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:29:53.873897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:29:53.874034 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 13:29:53.881889 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:29:53.883781 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:53.884983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:29:53.885322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:29:53.899099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:29:53.904072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:53.904528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:29:53.909380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:29:53.918308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:29:53.920042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:29:53.920235 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 13:29:53.920474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:29:53.920609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:29:53.922374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:29:53.928001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:29:53.936472 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:29:53.937951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:29:53.939910 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:29:53.947226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:29:53.953805 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:29:53.956199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:29:53.956554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:29:53.959338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:29:53.968810 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:29:53.977009 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:29:53.987504 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:29:53.987927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:29:53.990229 augenrules[1488]: No rules
Mar 2 13:29:53.991926 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:29:53.993958 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 13:29:54.005157 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:29:54.007472 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:29:54.017054 systemd-udevd[1451]: Using default interface naming scheme 'v255'.
Mar 2 13:29:54.081875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:29:54.090900 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:29:54.134968 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:29:54.140111 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:29:54.144211 systemd-resolved[1449]: Positive Trust Anchors:
Mar 2 13:29:54.144231 systemd-resolved[1449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:29:54.144277 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:29:54.151181 systemd-resolved[1449]: Using system hostname 'srv-u4d8l.gb1.brightbox.com'.
Mar 2 13:29:54.154612 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:29:54.157421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:29:54.158870 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:29:54.160472 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:29:54.161755 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:29:54.163200 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 2 13:29:54.164922 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:29:54.166267 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:29:54.167687 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:29:54.168678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:29:54.168762 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:29:54.169881 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:29:54.174982 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:29:54.179850 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:29:54.186909 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 2 13:29:54.189323 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 2 13:29:54.190612 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 2 13:29:54.201532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:29:54.203426 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 2 13:29:54.208110 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:29:54.227665 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:29:54.229734 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:29:54.231218 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:29:54.231312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:29:54.234938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 2 13:29:54.239538 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:29:54.242975 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:29:54.252955 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:29:54.262020 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:29:54.262789 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:29:54.265977 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 2 13:29:54.270958 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:29:54.277755 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:54.283023 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:29:54.289613 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:29:54.293986 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Mar 2 13:29:54.294348 oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Mar 2 13:29:54.300194 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting
Mar 2 13:29:54.300527 oslogin_cache_refresh[1535]: Failure getting users, quitting
Mar 2 13:29:54.300646 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 13:29:54.300724 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 13:29:54.300878 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache
Mar 2 13:29:54.300935 oslogin_cache_refresh[1535]: Refreshing group entry cache
Mar 2 13:29:54.301980 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting
Mar 2 13:29:54.302265 oslogin_cache_refresh[1535]: Failure getting groups, quitting
Mar 2 13:29:54.302393 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 13:29:54.302450 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 13:29:54.303995 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:29:54.314573 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:29:54.317910 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:29:54.319581 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:29:54.320984 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:29:54.326266 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:29:54.330153 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:29:54.331963 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 2 13:29:54.332789 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 2 13:29:54.344729 extend-filesystems[1534]: Found /dev/vda6
Mar 2 13:29:54.356298 jq[1533]: false
Mar 2 13:29:54.358457 jq[1547]: true
Mar 2 13:29:54.360720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:29:54.361793 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:29:54.380100 extend-filesystems[1534]: Found /dev/vda9
Mar 2 13:29:54.379188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:29:54.383423 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:29:54.396584 extend-filesystems[1534]: Checking size of /dev/vda9
Mar 2 13:29:54.403669 tar[1555]: linux-amd64/LICENSE
Mar 2 13:29:54.403669 tar[1555]: linux-amd64/helm
Mar 2 13:29:54.444151 jq[1552]: true
Mar 2 13:29:54.449570 extend-filesystems[1534]: Resized partition /dev/vda9
Mar 2 13:29:54.455722 extend-filesystems[1573]: resize2fs 1.47.3 (8-Jul-2025)
Mar 2 13:29:54.472092 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 2 13:29:54.472191 update_engine[1546]: I20260302 13:29:54.462663 1546 main.cc:92] Flatcar Update Engine starting
Mar 2 13:29:54.475764 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:29:54.476295 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:29:54.496398 dbus-daemon[1531]: [system] SELinux support is enabled
Mar 2 13:29:54.496751 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:29:54.509362 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:29:54.509418 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:29:54.513167 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:29:54.513207 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:29:54.539406 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:29:54.539875 update_engine[1546]: I20260302 13:29:54.539745 1546 update_check_scheduler.cc:74] Next update check in 6m41s
Mar 2 13:29:54.545412 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:29:54.549300 systemd-networkd[1503]: lo: Link UP
Mar 2 13:29:54.549307 systemd-networkd[1503]: lo: Gained carrier
Mar 2 13:29:54.550662 systemd-networkd[1503]: Enumeration completed
Mar 2 13:29:54.550820 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:29:54.552613 systemd[1]: Reached target network.target - Network.
Mar 2 13:29:54.556677 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:29:54.561058 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 2 13:29:54.565162 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:29:54.625457 bash[1589]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:29:54.627163 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:29:54.638649 systemd[1]: Starting sshkeys.service...
Mar 2 13:29:54.714008 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:29:54.729480 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 2 13:29:54.742772 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 2 13:29:54.749453 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 2 13:29:54.779072 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:29:54.816729 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:54.822770 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 2 13:29:54.854110 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:29:54.854110 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 2 13:29:54.854110 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 2 13:29:54.873559 extend-filesystems[1534]: Resized filesystem in /dev/vda9
Mar 2 13:29:54.855965 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:29:54.856366 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:29:54.867320 systemd-logind[1545]: New seat seat0.
Mar 2 13:29:54.874432 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:29:54.892072 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:29:55.066042 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:29:55.066322 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:29:55.073912 systemd-networkd[1503]: eth0: Link UP
Mar 2 13:29:55.074243 systemd-networkd[1503]: eth0: Gained carrier
Mar 2 13:29:55.074281 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:29:55.097123 dbus-daemon[1531]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1503 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 2 13:29:55.097586 systemd-networkd[1503]: eth0: DHCPv4 address 10.230.30.118/30, gateway 10.230.30.117 acquired from 10.230.30.117
Mar 2 13:29:55.104516 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 2 13:29:55.105810 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection.
Mar 2 13:29:55.122085 containerd[1604]: time="2026-03-02T13:29:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 2 13:29:55.130722 containerd[1604]: time="2026-03-02T13:29:55.126362886Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 2 13:29:55.160934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:29:55.171803 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:29:55.224289 containerd[1604]: time="2026-03-02T13:29:55.224203821Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.589µs"
Mar 2 13:29:55.226614 containerd[1604]: time="2026-03-02T13:29:55.226577043Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 2 13:29:55.234802 containerd[1604]: time="2026-03-02T13:29:55.234137741Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 2 13:29:55.234802 containerd[1604]: time="2026-03-02T13:29:55.234472554Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 2 13:29:55.234802 containerd[1604]: time="2026-03-02T13:29:55.234510235Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 2 13:29:55.234802 containerd[1604]: time="2026-03-02T13:29:55.234562024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 13:29:55.234802 containerd[1604]: time="2026-03-02T13:29:55.234673397Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.234695687Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.236513703Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.236542153Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.236562112Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.236577754Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.236721301Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.240452460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.240505949Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 13:29:55.240648 containerd[1604]: time="2026-03-02T13:29:55.240526986Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 2 13:29:55.245107 containerd[1604]: time="2026-03-02T13:29:55.245057439Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 2 13:29:55.253883 containerd[1604]: time="2026-03-02T13:29:55.249957532Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 2 13:29:55.253883 containerd[1604]: time="2026-03-02T13:29:55.250117253Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:29:55.276595 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:29:55.300780 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:29:55.311008 containerd[1604]: time="2026-03-02T13:29:55.308056593Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 2 13:29:55.315258 containerd[1604]: time="2026-03-02T13:29:55.313763910Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 2 13:29:55.315258 containerd[1604]: time="2026-03-02T13:29:55.313843879Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 2 13:29:55.315258 containerd[1604]: time="2026-03-02T13:29:55.313906276Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 2 13:29:55.315258 containerd[1604]: time="2026-03-02T13:29:55.313942270Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 2 13:29:55.315258 containerd[1604]: time="2026-03-02T13:29:55.314839505Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 2 13:29:55.315258 containerd[1604]: time="2026-03-02T13:29:55.314924725Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 2 13:29:55.315624 containerd[1604]: time="2026-03-02T13:29:55.315552544Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 2 13:29:55.320020 containerd[1604]: time="2026-03-02T13:29:55.316065470Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 2 13:29:55.320020 containerd[1604]: time="2026-03-02T13:29:55.316114912Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 2 13:29:55.320020 containerd[1604]: time="2026-03-02T13:29:55.316509614Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 2 13:29:55.320020 containerd[1604]: time="2026-03-02T13:29:55.317772768Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 2 13:29:55.320807 containerd[1604]: time="2026-03-02T13:29:55.320693987Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 2 13:29:55.321446 containerd[1604]: time="2026-03-02T13:29:55.320895082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 2 13:29:55.321446 containerd[1604]: time="2026-03-02T13:29:55.321367458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 2 13:29:55.321600 containerd[1604]: time="2026-03-02T13:29:55.321571333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 2 13:29:55.321779 containerd[1604]: time="2026-03-02T13:29:55.321749537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 2 13:29:55.323792 containerd[1604]: time="2026-03-02T13:29:55.322780477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 2 13:29:55.323792 containerd[1604]: time="2026-03-02T13:29:55.322827150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 2 13:29:55.323792 containerd[1604]: time="2026-03-02T13:29:55.322868064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 2 13:29:55.323792 containerd[1604]: time="2026-03-02T13:29:55.322904911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 2 13:29:55.323792 containerd[1604]: time="2026-03-02T13:29:55.322933431Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 2 13:29:55.323792 containerd[1604]: time="2026-03-02T13:29:55.322980533Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 2 13:29:55.332477 containerd[1604]: time="2026-03-02T13:29:55.330133788Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 2 13:29:55.332477 containerd[1604]: time="2026-03-02T13:29:55.330199086Z" level=info msg="Start snapshots syncer"
Mar 2 13:29:55.332477 containerd[1604]: time="2026-03-02T13:29:55.330309434Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 2 13:29:55.332596 containerd[1604]: time="2026-03-02T13:29:55.331145114Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 2 13:29:55.332596 containerd[1604]: time="2026-03-02T13:29:55.331283062Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.339375830Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.340924556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.340981413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341026226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341071425Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341100124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341121209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341146323Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341204939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341232461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 2 13:29:55.343475 containerd[1604]: time="2026-03-02T13:29:55.341278787Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347203013Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347279184Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347300702Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347318334Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347332772Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347349527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347384859Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347440140Z" level=info msg="runtime interface created"
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347451788Z" level=info msg="created NRI interface"
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347465279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347494763Z" level=info msg="Connect containerd service"
Mar 2 13:29:55.350711 containerd[1604]: time="2026-03-02T13:29:55.347541822Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:29:55.356888 containerd[1604]: time="2026-03-02T13:29:55.355749938Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:29:55.366946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 2 13:29:55.391069 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:29:55.553833 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 2 13:29:55.554634 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 2 13:29:55.558814 dbus-daemon[1531]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1621 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 2 13:29:55.571359 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 2 13:29:55.603161 containerd[1604]: time="2026-03-02T13:29:55.603076644Z" level=info msg="Start subscribing containerd event"
Mar 2 13:29:55.603363 containerd[1604]: time="2026-03-02T13:29:55.603267854Z" level=info msg="Start recovering state"
Mar 2 13:29:55.604641 containerd[1604]: time="2026-03-02T13:29:55.604589955Z" level=info msg="Start event monitor"
Mar 2 13:29:55.604641 containerd[1604]: time="2026-03-02T13:29:55.604634684Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:29:55.604775 containerd[1604]: time="2026-03-02T13:29:55.604656098Z" level=info msg="Start streaming server"
Mar 2 13:29:55.604775 containerd[1604]: time="2026-03-02T13:29:55.604686577Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 2 13:29:55.605594 containerd[1604]: time="2026-03-02T13:29:55.605562639Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:29:55.605684 containerd[1604]: time="2026-03-02T13:29:55.605650507Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:29:55.606132 containerd[1604]: time="2026-03-02T13:29:55.606024124Z" level=info msg="runtime interface starting up..."
Mar 2 13:29:55.606132 containerd[1604]: time="2026-03-02T13:29:55.606060632Z" level=info msg="starting plugins..."
Mar 2 13:29:55.607865 containerd[1604]: time="2026-03-02T13:29:55.607834510Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 2 13:29:55.608275 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:29:55.611215 containerd[1604]: time="2026-03-02T13:29:55.611157596Z" level=info msg="containerd successfully booted in 0.491069s"
Mar 2 13:29:55.666745 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:29:55.673731 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:29:55.687032 polkitd[1641]: Started polkitd version 126
Mar 2 13:29:55.695068 polkitd[1641]: Loading rules from directory /etc/polkit-1/rules.d
Mar 2 13:29:55.695730 polkitd[1641]: Loading rules from directory /run/polkit-1/rules.d
Mar 2 13:29:55.695911 polkitd[1641]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 2 13:29:55.696388 polkitd[1641]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Mar 2 13:29:55.696517 polkitd[1641]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Mar 2 13:29:55.696716 polkitd[1641]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 2 13:29:55.697580 polkitd[1641]: Finished loading, compiling and executing 2 rules
Mar 2 13:29:55.701915 systemd[1]: Started polkit.service - Authorization Manager.
Mar 2 13:29:55.702549 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 2 13:29:55.703413 polkitd[1641]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 2 13:29:55.730790 systemd-hostnamed[1621]: Hostname set to (static)
Mar 2 13:29:56.021332 tar[1555]: linux-amd64/README.md
Mar 2 13:29:56.045569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:29:56.054309 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:29:56.060886 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:29:56.228729 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:29:56.326485 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:29:56.336671 systemd-logind[1545]: Watching system buttons on /dev/input/event3 (Power Button)
Mar 2 13:29:56.431301 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:29:56.499656 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:29:56.500068 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:29:56.525188 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:29:56.590846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:29:56.625501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:56.631615 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:29:56.635999 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 13:29:56.637249 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:29:56.894215 systemd-networkd[1503]: eth0: Gained IPv6LL
Mar 2 13:29:56.895369 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection.
Mar 2 13:29:56.899382 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:29:56.901874 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:29:56.905974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:29:56.908933 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:29:56.953359 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:29:57.498739 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:57.507438 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:57.936260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:29:57.956355 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:29:58.203272 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection.
Mar 2 13:29:58.205756 systemd-networkd[1503]: eth0: Ignoring DHCPv6 address 2a02:1348:179:879d:24:19ff:fee6:1e76/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:879d:24:19ff:fee6:1e76/64 assigned by NDisc.
Mar 2 13:29:58.205767 systemd-networkd[1503]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 2 13:29:58.584988 kubelet[1706]: E0302 13:29:58.584796 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:29:58.588413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:29:58.588974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:29:58.590110 systemd[1]: kubelet.service: Consumed 1.124s CPU time, 266.6M memory peak.
Mar 2 13:29:59.393996 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection.
Mar 2 13:29:59.528569 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:29:59.534750 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:30:01.726860 login[1686]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 2 13:30:01.733303 login[1685]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 2 13:30:01.751824 systemd-logind[1545]: New session 1 of user core.
Mar 2 13:30:01.755355 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 2 13:30:01.757446 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 2 13:30:01.762905 systemd-logind[1545]: New session 2 of user core.
Mar 2 13:30:01.792280 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 2 13:30:01.797952 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 2 13:30:01.816647 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 2 13:30:01.820729 systemd-logind[1545]: New session c1 of user core.
Mar 2 13:30:02.024762 systemd[1722]: Queued start job for default target default.target.
Mar 2 13:30:02.034659 systemd[1722]: Created slice app.slice - User Application Slice.
Mar 2 13:30:02.034728 systemd[1722]: Reached target paths.target - Paths.
Mar 2 13:30:02.034811 systemd[1722]: Reached target timers.target - Timers.
Mar 2 13:30:02.037124 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 2 13:30:02.053605 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 2 13:30:02.054001 systemd[1722]: Reached target sockets.target - Sockets.
Mar 2 13:30:02.054277 systemd[1722]: Reached target basic.target - Basic System.
Mar 2 13:30:02.054528 systemd[1722]: Reached target default.target - Main User Target.
Mar 2 13:30:02.054579 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 2 13:30:02.054855 systemd[1722]: Startup finished in 223ms.
Mar 2 13:30:02.067480 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 2 13:30:02.069282 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 2 13:30:03.554854 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:30:03.559739 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 2 13:30:03.576094 coreos-metadata[1607]: Mar 02 13:30:03.575 WARN failed to locate config-drive, using the metadata service API instead
Mar 2 13:30:03.576947 coreos-metadata[1530]: Mar 02 13:30:03.574 WARN failed to locate config-drive, using the metadata service API instead
Mar 2 13:30:03.603215 coreos-metadata[1607]: Mar 02 13:30:03.603 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 2 13:30:03.603379 coreos-metadata[1530]: Mar 02 13:30:03.603 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 2 13:30:03.609940 coreos-metadata[1530]: Mar 02 13:30:03.609 INFO Fetch failed with 404: resource not found
Mar 2 13:30:03.609940 coreos-metadata[1530]: Mar 02 13:30:03.609 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 2 13:30:03.610787 coreos-metadata[1530]: Mar 02 13:30:03.610 INFO Fetch successful
Mar 2 13:30:03.611092 coreos-metadata[1530]: Mar 02 13:30:03.611 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 2 13:30:03.629664 coreos-metadata[1530]: Mar 02 13:30:03.629 INFO Fetch successful
Mar 2 13:30:03.629917 coreos-metadata[1530]: Mar 02 13:30:03.629 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 2 13:30:03.642671 coreos-metadata[1607]: Mar 02 13:30:03.642 INFO Fetch successful
Mar 2 13:30:03.643008 coreos-metadata[1607]: Mar 02 13:30:03.642 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 2 13:30:03.644969 coreos-metadata[1530]: Mar 02 13:30:03.644 INFO Fetch successful
Mar 2 13:30:03.645243 coreos-metadata[1530]: Mar 02 13:30:03.645 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 2 13:30:03.662740 coreos-metadata[1530]: Mar 02 13:30:03.662 INFO Fetch successful
Mar 2 13:30:03.662940 coreos-metadata[1530]: Mar 02 13:30:03.662 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 2 13:30:03.670722 coreos-metadata[1607]: Mar 02 13:30:03.670 INFO Fetch successful
Mar 2 13:30:03.673249 unknown[1607]: wrote ssh authorized keys file for user: core
Mar 2 13:30:03.699627 coreos-metadata[1530]: Mar 02 13:30:03.697 INFO Fetch successful
Mar 2 13:30:03.705730 update-ssh-keys[1757]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:30:03.706836 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 2 13:30:03.712288 systemd[1]: Finished sshkeys.service.
Mar 2 13:30:03.731731 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 2 13:30:03.732999 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:30:03.733392 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 2 13:30:03.733889 systemd[1]: Startup finished in 3.557s (kernel) + 15.150s (initrd) + 12.991s (userspace) = 31.699s.
Mar 2 13:30:05.625334 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 2 13:30:05.628116 systemd[1]: Started sshd@0-10.230.30.118:22-68.220.241.50:37130.service - OpenSSH per-connection server daemon (68.220.241.50:37130).
Mar 2 13:30:06.186825 sshd[1766]: Accepted publickey for core from 68.220.241.50 port 37130 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:06.189458 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:06.198953 systemd-logind[1545]: New session 3 of user core.
Mar 2 13:30:06.206215 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 2 13:30:06.571046 systemd[1]: Started sshd@1-10.230.30.118:22-68.220.241.50:37136.service - OpenSSH per-connection server daemon (68.220.241.50:37136).
Mar 2 13:30:07.076749 sshd[1772]: Accepted publickey for core from 68.220.241.50 port 37136 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:07.078086 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:07.085476 systemd-logind[1545]: New session 4 of user core.
Mar 2 13:30:07.097270 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 2 13:30:07.354625 sshd[1775]: Connection closed by 68.220.241.50 port 37136
Mar 2 13:30:07.355492 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Mar 2 13:30:07.363166 systemd[1]: sshd@1-10.230.30.118:22-68.220.241.50:37136.service: Deactivated successfully.
Mar 2 13:30:07.366923 systemd[1]: session-4.scope: Deactivated successfully.
Mar 2 13:30:07.369625 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit.
Mar 2 13:30:07.372611 systemd-logind[1545]: Removed session 4.
Mar 2 13:30:07.472068 systemd[1]: Started sshd@2-10.230.30.118:22-68.220.241.50:37152.service - OpenSSH per-connection server daemon (68.220.241.50:37152).
Mar 2 13:30:08.006415 sshd[1781]: Accepted publickey for core from 68.220.241.50 port 37152 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:08.008451 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:08.017738 systemd-logind[1545]: New session 5 of user core.
Mar 2 13:30:08.026213 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 2 13:30:08.290462 sshd[1784]: Connection closed by 68.220.241.50 port 37152
Mar 2 13:30:08.291586 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Mar 2 13:30:08.298992 systemd[1]: sshd@2-10.230.30.118:22-68.220.241.50:37152.service: Deactivated successfully.
Mar 2 13:30:08.301538 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:30:08.303974 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:30:08.306189 systemd-logind[1545]: Removed session 5.
Mar 2 13:30:08.396274 systemd[1]: Started sshd@3-10.230.30.118:22-68.220.241.50:37160.service - OpenSSH per-connection server daemon (68.220.241.50:37160).
Mar 2 13:30:08.695895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:30:08.699022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:08.905097 sshd[1790]: Accepted publickey for core from 68.220.241.50 port 37160 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:08.907526 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:08.920090 systemd-logind[1545]: New session 6 of user core.
Mar 2 13:30:08.925152 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 2 13:30:08.930342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:08.950169 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:09.023214 kubelet[1801]: E0302 13:30:09.023103 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:09.028597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:09.028929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:09.029934 systemd[1]: kubelet.service: Consumed 271ms CPU time, 109M memory peak.
Mar 2 13:30:09.187733 sshd[1802]: Connection closed by 68.220.241.50 port 37160
Mar 2 13:30:09.186986 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Mar 2 13:30:09.193194 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit.
Mar 2 13:30:09.193469 systemd[1]: sshd@3-10.230.30.118:22-68.220.241.50:37160.service: Deactivated successfully.
Mar 2 13:30:09.195993 systemd[1]: session-6.scope: Deactivated successfully.
Mar 2 13:30:09.199676 systemd-logind[1545]: Removed session 6.
Mar 2 13:30:09.299112 systemd[1]: Started sshd@4-10.230.30.118:22-68.220.241.50:37174.service - OpenSSH per-connection server daemon (68.220.241.50:37174).
Mar 2 13:30:09.841532 sshd[1813]: Accepted publickey for core from 68.220.241.50 port 37174 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:09.843541 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:09.852692 systemd-logind[1545]: New session 7 of user core.
Mar 2 13:30:09.862086 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 2 13:30:10.049992 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 2 13:30:10.050441 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:30:10.065410 sudo[1817]: pam_unix(sudo:session): session closed for user root
Mar 2 13:30:10.162306 sshd[1816]: Connection closed by 68.220.241.50 port 37174
Mar 2 13:30:10.163230 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
Mar 2 13:30:10.168901 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:30:10.170356 systemd[1]: sshd@4-10.230.30.118:22-68.220.241.50:37174.service: Deactivated successfully.
Mar 2 13:30:10.172494 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:30:10.176605 systemd-logind[1545]: Removed session 7.
Mar 2 13:30:10.262918 systemd[1]: Started sshd@5-10.230.30.118:22-68.220.241.50:37184.service - OpenSSH per-connection server daemon (68.220.241.50:37184).
Mar 2 13:30:10.763568 sshd[1823]: Accepted publickey for core from 68.220.241.50 port 37184 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:10.764393 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:10.772906 systemd-logind[1545]: New session 8 of user core.
Mar 2 13:30:10.782124 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:30:10.951112 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 2 13:30:10.951554 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:30:10.960476 sudo[1828]: pam_unix(sudo:session): session closed for user root
Mar 2 13:30:10.968983 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 2 13:30:10.969423 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:30:10.986260 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 13:30:11.051905 augenrules[1850]: No rules
Mar 2 13:30:11.053289 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:30:11.053635 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 13:30:11.056269 sudo[1827]: pam_unix(sudo:session): session closed for user root
Mar 2 13:30:11.149431 sshd[1826]: Connection closed by 68.220.241.50 port 37184
Mar 2 13:30:11.150131 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Mar 2 13:30:11.155903 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:30:11.157906 systemd[1]: sshd@5-10.230.30.118:22-68.220.241.50:37184.service: Deactivated successfully.
Mar 2 13:30:11.161611 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:30:11.164911 systemd-logind[1545]: Removed session 8.
Mar 2 13:30:11.258029 systemd[1]: Started sshd@6-10.230.30.118:22-68.220.241.50:37200.service - OpenSSH per-connection server daemon (68.220.241.50:37200).
Mar 2 13:30:11.787201 sshd[1859]: Accepted publickey for core from 68.220.241.50 port 37200 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:30:11.789660 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:11.799837 systemd-logind[1545]: New session 9 of user core.
Mar 2 13:30:11.807054 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:30:11.983413 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:30:11.983887 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:30:12.506326 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 2 13:30:12.528593 (dockerd)[1882]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 2 13:30:12.896762 dockerd[1882]: time="2026-03-02T13:30:12.894757280Z" level=info msg="Starting up"
Mar 2 13:30:12.899725 dockerd[1882]: time="2026-03-02T13:30:12.899227358Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 2 13:30:12.920953 dockerd[1882]: time="2026-03-02T13:30:12.920884893Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 2 13:30:12.953492 systemd[1]: var-lib-docker-metacopy\x2dcheck1284242121-merged.mount: Deactivated successfully.
Mar 2 13:30:12.980323 dockerd[1882]: time="2026-03-02T13:30:12.979945790Z" level=info msg="Loading containers: start."
Mar 2 13:30:12.997762 kernel: Initializing XFRM netlink socket
Mar 2 13:30:13.293441 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection.
Mar 2 13:30:13.361546 systemd-networkd[1503]: docker0: Link UP
Mar 2 13:30:13.366913 dockerd[1882]: time="2026-03-02T13:30:13.366848901Z" level=info msg="Loading containers: done."
Mar 2 13:30:13.386080 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3202637228-merged.mount: Deactivated successfully.
Mar 2 13:30:13.388558 dockerd[1882]: time="2026-03-02T13:30:13.388479139Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 2 13:30:13.388678 dockerd[1882]: time="2026-03-02T13:30:13.388640438Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 2 13:30:13.391770 dockerd[1882]: time="2026-03-02T13:30:13.391076180Z" level=info msg="Initializing buildkit"
Mar 2 13:30:13.421931 dockerd[1882]: time="2026-03-02T13:30:13.421857204Z" level=info msg="Completed buildkit initialization"
Mar 2 13:30:13.433719 dockerd[1882]: time="2026-03-02T13:30:13.433566431Z" level=info msg="Daemon has completed initialization"
Mar 2 13:30:13.433888 dockerd[1882]: time="2026-03-02T13:30:13.433813163Z" level=info msg="API listen on /run/docker.sock"
Mar 2 13:30:13.435407 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 2 13:30:14.097494 containerd[1604]: time="2026-03-02T13:30:14.097225888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 2 13:30:15.307310 systemd-resolved[1449]: Clock change detected. Flushing caches.
Mar 2 13:30:15.308216 systemd-timesyncd[1483]: Contacted time server [2a01:7e00::f03c:94ff:fee2:c5f7]:123 (2.flatcar.pool.ntp.org).
Mar 2 13:30:15.308673 systemd-timesyncd[1483]: Initial clock synchronization to Mon 2026-03-02 13:30:15.306974 UTC.
Mar 2 13:30:15.398958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738505351.mount: Deactivated successfully.
Mar 2 13:30:18.968472 containerd[1604]: time="2026-03-02T13:30:18.968359232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:18.970941 containerd[1604]: time="2026-03-02T13:30:18.970635625Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 2 13:30:18.971860 containerd[1604]: time="2026-03-02T13:30:18.971819514Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:18.975300 containerd[1604]: time="2026-03-02T13:30:18.975262752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:18.976851 containerd[1604]: time="2026-03-02T13:30:18.976811007Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 4.33901659s"
Mar 2 13:30:18.976959 containerd[1604]: time="2026-03-02T13:30:18.976867705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 2 13:30:18.977891 containerd[1604]: time="2026-03-02T13:30:18.977683694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 2 13:30:19.736466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 2 13:30:19.742516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:19.986434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:19.998528 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:20.057048 kubelet[2162]: E0302 13:30:20.056962 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:20.059524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:20.060056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:20.060855 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.7M memory peak.
Mar 2 13:30:21.274235 containerd[1604]: time="2026-03-02T13:30:21.272801752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:21.276196 containerd[1604]: time="2026-03-02T13:30:21.275839068Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 2 13:30:21.276995 containerd[1604]: time="2026-03-02T13:30:21.276949526Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:21.281542 containerd[1604]: time="2026-03-02T13:30:21.281487265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:21.284287 containerd[1604]: time="2026-03-02T13:30:21.284214822Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.306487946s"
Mar 2 13:30:21.284287 containerd[1604]: time="2026-03-02T13:30:21.284275571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 2 13:30:21.285802 containerd[1604]: time="2026-03-02T13:30:21.285456645Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 2 13:30:23.138531 containerd[1604]: time="2026-03-02T13:30:23.138458164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:23.139955 containerd[1604]: time="2026-03-02T13:30:23.139763049Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 2 13:30:23.140586 containerd[1604]: time="2026-03-02T13:30:23.140544253Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:23.144054 containerd[1604]: time="2026-03-02T13:30:23.144016422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:23.145685 containerd[1604]: time="2026-03-02T13:30:23.145603223Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.860054465s"
Mar 2 13:30:23.145983 containerd[1604]: time="2026-03-02T13:30:23.145781619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 2 13:30:23.147442 containerd[1604]: time="2026-03-02T13:30:23.147231511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 2 13:30:24.910589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928755633.mount: Deactivated successfully.
Mar 2 13:30:25.742527 containerd[1604]: time="2026-03-02T13:30:25.742452228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:25.744461 containerd[1604]: time="2026-03-02T13:30:25.744264103Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 2 13:30:25.746385 containerd[1604]: time="2026-03-02T13:30:25.746328145Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:25.749123 containerd[1604]: time="2026-03-02T13:30:25.749081838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:25.750116 containerd[1604]: time="2026-03-02T13:30:25.750077792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.602798583s"
Mar 2 13:30:25.750254 containerd[1604]: time="2026-03-02T13:30:25.750227704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 2 13:30:25.750941 containerd[1604]: time="2026-03-02T13:30:25.750905990Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 2 13:30:26.353039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151940621.mount: Deactivated successfully.
Mar 2 13:30:28.041240 containerd[1604]: time="2026-03-02T13:30:28.041136617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:28.042585 containerd[1604]: time="2026-03-02T13:30:28.042549564Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 2 13:30:28.044048 containerd[1604]: time="2026-03-02T13:30:28.043975500Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:28.047297 containerd[1604]: time="2026-03-02T13:30:28.047263541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:28.049076 containerd[1604]: time="2026-03-02T13:30:28.048734770Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.297786079s"
Mar 2 13:30:28.049076 containerd[1604]: time="2026-03-02T13:30:28.048781291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 2 13:30:28.050022 containerd[1604]: time="2026-03-02T13:30:28.049976725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 2 13:30:28.610994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860905251.mount: Deactivated successfully.
Mar 2 13:30:28.711192 containerd[1604]: time="2026-03-02T13:30:28.710778124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:30:28.712342 containerd[1604]: time="2026-03-02T13:30:28.712311021Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 2 13:30:28.713518 containerd[1604]: time="2026-03-02T13:30:28.713475147Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:30:28.721568 containerd[1604]: time="2026-03-02T13:30:28.721387366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:30:28.722664 containerd[1604]: time="2026-03-02T13:30:28.722478233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 672.457273ms"
Mar 2 13:30:28.722664 containerd[1604]: time="2026-03-02T13:30:28.722520428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 2 13:30:28.724128 containerd[1604]: time="2026-03-02T13:30:28.723870079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 2 13:30:28.762311 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 2 13:30:29.259386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531981215.mount: Deactivated successfully.
Mar 2 13:30:30.236395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 13:30:30.240529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:30.496039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:30.511992 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:30.585068 kubelet[2297]: E0302 13:30:30.585007 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:30.587363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:30.587623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:30.588124 systemd[1]: kubelet.service: Consumed 244ms CPU time, 111.1M memory peak.
Mar 2 13:30:32.341985 containerd[1604]: time="2026-03-02T13:30:32.341892168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:32.343349 containerd[1604]: time="2026-03-02T13:30:32.343296173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 2 13:30:32.344234 containerd[1604]: time="2026-03-02T13:30:32.344174633Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:32.348755 containerd[1604]: time="2026-03-02T13:30:32.348436213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:32.350179 containerd[1604]: time="2026-03-02T13:30:32.349879479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.625942176s"
Mar 2 13:30:32.350179 containerd[1604]: time="2026-03-02T13:30:32.349926790Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 2 13:30:32.469466 systemd[1]: Started sshd@7-10.230.30.118:22-152.32.252.65:40356.service - OpenSSH per-connection server daemon (152.32.252.65:40356).
Mar 2 13:30:33.768582 sshd[2329]: Received disconnect from 152.32.252.65 port 40356:11: Bye Bye [preauth]
Mar 2 13:30:33.768582 sshd[2329]: Disconnected from authenticating user root 152.32.252.65 port 40356 [preauth]
Mar 2 13:30:33.774687 systemd[1]: sshd@7-10.230.30.118:22-152.32.252.65:40356.service: Deactivated successfully.
Mar 2 13:30:36.758315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:36.758959 systemd[1]: kubelet.service: Consumed 244ms CPU time, 111.1M memory peak.
Mar 2 13:30:36.763649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:36.805085 systemd[1]: Reload requested from client PID 2357 ('systemctl') (unit session-9.scope)...
Mar 2 13:30:36.805138 systemd[1]: Reloading...
Mar 2 13:30:36.979215 zram_generator::config[2402]: No configuration found.
Mar 2 13:30:37.335234 systemd[1]: Reloading finished in 529 ms.
Mar 2 13:30:37.423905 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 2 13:30:37.424647 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 2 13:30:37.425017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:37.425096 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98M memory peak.
Mar 2 13:30:37.428042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:37.633953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:37.647901 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:30:37.727593 kubelet[2469]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:30:37.728199 kubelet[2469]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:30:37.728314 kubelet[2469]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:30:37.728561 kubelet[2469]: I0302 13:30:37.728504 2469 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:30:38.951838 kubelet[2469]: I0302 13:30:38.951777 2469 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:30:38.951838 kubelet[2469]: I0302 13:30:38.951830 2469 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:30:38.952692 kubelet[2469]: I0302 13:30:38.952226 2469 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:30:38.993113 kubelet[2469]: I0302 13:30:38.993059 2469 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:30:38.993868 kubelet[2469]: E0302 13:30:38.993080 2469 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.30.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:30:39.014897 kubelet[2469]: I0302 13:30:39.014848 2469 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 13:30:39.032537 kubelet[2469]: I0302 13:30:39.032462 2469 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 2 13:30:39.034187 kubelet[2469]: I0302 13:30:39.033949 2469 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:30:39.037405 kubelet[2469]: I0302 13:30:39.034021 2469 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-u4d8l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:30:39.037405 kubelet[2469]: I0302 13:30:39.037404 2469 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 
13:30:39.037840 kubelet[2469]: I0302 13:30:39.037425 2469 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:30:39.037840 kubelet[2469]: I0302 13:30:39.037706 2469 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:30:39.043339 kubelet[2469]: I0302 13:30:39.043259 2469 kubelet.go:480] "Attempting to sync node with API server" Mar 2 13:30:39.043511 kubelet[2469]: I0302 13:30:39.043475 2469 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:30:39.043571 kubelet[2469]: I0302 13:30:39.043552 2469 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:30:39.043630 kubelet[2469]: I0302 13:30:39.043593 2469 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:30:39.049956 kubelet[2469]: E0302 13:30:39.049651 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.30.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-u4d8l.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:30:39.050669 kubelet[2469]: E0302 13:30:39.050271 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.30.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:30:39.051218 kubelet[2469]: I0302 13:30:39.051184 2469 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 13:30:39.051967 kubelet[2469]: I0302 13:30:39.051937 2469 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 
13:30:39.052772 kubelet[2469]: W0302 13:30:39.052743 2469 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 2 13:30:39.067846 kubelet[2469]: I0302 13:30:39.067809 2469 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:30:39.068579 kubelet[2469]: I0302 13:30:39.067897 2469 server.go:1289] "Started kubelet" Mar 2 13:30:39.070529 kubelet[2469]: I0302 13:30:39.070103 2469 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:30:39.070942 kubelet[2469]: I0302 13:30:39.070898 2469 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:30:39.074089 kubelet[2469]: I0302 13:30:39.073571 2469 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:30:39.077691 kubelet[2469]: I0302 13:30:39.077666 2469 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:30:39.084776 kubelet[2469]: I0302 13:30:39.084731 2469 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:30:39.086185 kubelet[2469]: I0302 13:30:39.085651 2469 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:30:39.086468 kubelet[2469]: E0302 13:30:39.082561 2469 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.30.118:6443/api/v1/namespaces/default/events\": dial tcp 10.230.30.118:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-u4d8l.gb1.brightbox.com.18990960a6da62b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-u4d8l.gb1.brightbox.com,UID:srv-u4d8l.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:srv-u4d8l.gb1.brightbox.com,},FirstTimestamp:2026-03-02 13:30:39.0678412 +0000 UTC m=+1.393366876,LastTimestamp:2026-03-02 13:30:39.0678412 +0000 UTC m=+1.393366876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-u4d8l.gb1.brightbox.com,}" Mar 2 13:30:39.088799 kubelet[2469]: I0302 13:30:39.088775 2469 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:30:39.089181 kubelet[2469]: I0302 13:30:39.089140 2469 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:30:39.092834 kubelet[2469]: I0302 13:30:39.092812 2469 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:30:39.093147 kubelet[2469]: E0302 13:30:39.093120 2469 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" Mar 2 13:30:39.101187 kubelet[2469]: I0302 13:30:39.099430 2469 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:30:39.101187 kubelet[2469]: I0302 13:30:39.099547 2469 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:30:39.103210 kubelet[2469]: I0302 13:30:39.103183 2469 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:30:39.121506 kubelet[2469]: E0302 13:30:39.121443 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.30.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:30:39.122355 kubelet[2469]: E0302 13:30:39.122304 2469 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.230.30.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-u4d8l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.118:6443: connect: connection refused" interval="200ms" Mar 2 13:30:39.132838 kubelet[2469]: I0302 13:30:39.132754 2469 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 13:30:39.136211 kubelet[2469]: I0302 13:30:39.135919 2469 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:30:39.136211 kubelet[2469]: I0302 13:30:39.135965 2469 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:30:39.136211 kubelet[2469]: I0302 13:30:39.136011 2469 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:30:39.136211 kubelet[2469]: I0302 13:30:39.136034 2469 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:30:39.136211 kubelet[2469]: E0302 13:30:39.136093 2469 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:30:39.150390 kubelet[2469]: I0302 13:30:39.150350 2469 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:30:39.150995 kubelet[2469]: I0302 13:30:39.150602 2469 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:30:39.150995 kubelet[2469]: I0302 13:30:39.150647 2469 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:30:39.151498 kubelet[2469]: E0302 13:30:39.151467 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.30.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:30:39.153467 kubelet[2469]: I0302 13:30:39.153443 
2469 policy_none.go:49] "None policy: Start" Mar 2 13:30:39.153587 kubelet[2469]: I0302 13:30:39.153567 2469 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 13:30:39.153732 kubelet[2469]: I0302 13:30:39.153715 2469 state_mem.go:35] "Initializing new in-memory state store" Mar 2 13:30:39.163258 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:30:39.185265 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 13:30:39.191336 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:30:39.193378 kubelet[2469]: E0302 13:30:39.193337 2469 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" Mar 2 13:30:39.203345 kubelet[2469]: E0302 13:30:39.203152 2469 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:30:39.204008 kubelet[2469]: I0302 13:30:39.203572 2469 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:30:39.204008 kubelet[2469]: I0302 13:30:39.203612 2469 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:30:39.206209 kubelet[2469]: I0302 13:30:39.205270 2469 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:30:39.206472 kubelet[2469]: E0302 13:30:39.206391 2469 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:30:39.206581 kubelet[2469]: E0302 13:30:39.206484 2469 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-u4d8l.gb1.brightbox.com\" not found" Mar 2 13:30:39.270255 systemd[1]: Created slice kubepods-burstable-pod01af3b129f72416bc333e34e18195d93.slice - libcontainer container kubepods-burstable-pod01af3b129f72416bc333e34e18195d93.slice. Mar 2 13:30:39.279568 kubelet[2469]: E0302 13:30:39.279516 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.283247 systemd[1]: Created slice kubepods-burstable-pod1c8be079247f2142d5741c450bbff0b2.slice - libcontainer container kubepods-burstable-pod1c8be079247f2142d5741c450bbff0b2.slice. Mar 2 13:30:39.292144 kubelet[2469]: E0302 13:30:39.292091 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.297097 systemd[1]: Created slice kubepods-burstable-pod2e52c2e21e030efe1b73dc21c3e36552.slice - libcontainer container kubepods-burstable-pod2e52c2e21e030efe1b73dc21c3e36552.slice. 
Mar 2 13:30:39.300982 kubelet[2469]: E0302 13:30:39.300389 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.307966 kubelet[2469]: I0302 13:30:39.307920 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.308670 kubelet[2469]: E0302 13:30:39.308620 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.118:6443/api/v1/nodes\": dial tcp 10.230.30.118:6443: connect: connection refused" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.323552 kubelet[2469]: E0302 13:30:39.323470 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-u4d8l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.118:6443: connect: connection refused" interval="400ms" Mar 2 13:30:39.401520 kubelet[2469]: I0302 13:30:39.401410 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c8be079247f2142d5741c450bbff0b2-kubeconfig\") pod \"kube-scheduler-srv-u4d8l.gb1.brightbox.com\" (UID: \"1c8be079247f2142d5741c450bbff0b2\") " pod="kube-system/kube-scheduler-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401520 kubelet[2469]: I0302 13:30:39.401511 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e52c2e21e030efe1b73dc21c3e36552-k8s-certs\") pod \"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" (UID: \"2e52c2e21e030efe1b73dc21c3e36552\") " pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401520 kubelet[2469]: I0302 13:30:39.401545 2469 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-ca-certs\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401885 kubelet[2469]: I0302 13:30:39.401574 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-flexvolume-dir\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401885 kubelet[2469]: I0302 13:30:39.401612 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401885 kubelet[2469]: I0302 13:30:39.401644 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e52c2e21e030efe1b73dc21c3e36552-ca-certs\") pod \"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" (UID: \"2e52c2e21e030efe1b73dc21c3e36552\") " pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401885 kubelet[2469]: I0302 13:30:39.401672 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e52c2e21e030efe1b73dc21c3e36552-usr-share-ca-certificates\") pod 
\"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" (UID: \"2e52c2e21e030efe1b73dc21c3e36552\") " pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.401885 kubelet[2469]: I0302 13:30:39.401746 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-k8s-certs\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.402130 kubelet[2469]: I0302 13:30:39.401780 2469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-kubeconfig\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.512696 kubelet[2469]: I0302 13:30:39.512644 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.513381 kubelet[2469]: E0302 13:30:39.513329 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.118:6443/api/v1/nodes\": dial tcp 10.230.30.118:6443: connect: connection refused" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.583084 containerd[1604]: time="2026-03-02T13:30:39.582980319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-u4d8l.gb1.brightbox.com,Uid:01af3b129f72416bc333e34e18195d93,Namespace:kube-system,Attempt:0,}" Mar 2 13:30:39.600774 containerd[1604]: time="2026-03-02T13:30:39.600456236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-u4d8l.gb1.brightbox.com,Uid:1c8be079247f2142d5741c450bbff0b2,Namespace:kube-system,Attempt:0,}" Mar 2 13:30:39.602472 
containerd[1604]: time="2026-03-02T13:30:39.602232430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-u4d8l.gb1.brightbox.com,Uid:2e52c2e21e030efe1b73dc21c3e36552,Namespace:kube-system,Attempt:0,}" Mar 2 13:30:39.733820 kubelet[2469]: E0302 13:30:39.732588 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-u4d8l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.118:6443: connect: connection refused" interval="800ms" Mar 2 13:30:39.735434 containerd[1604]: time="2026-03-02T13:30:39.735380639Z" level=info msg="connecting to shim 0becb9021ee0cb8d1130ef2894717b2a721b080d061e1395a1aadd009f208c80" address="unix:///run/containerd/s/6f31aa9d67b7438aed9fe32acd4933aa1fb508e082866fd4d607badbcb7a2842" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:30:39.736894 containerd[1604]: time="2026-03-02T13:30:39.736859921Z" level=info msg="connecting to shim 9539e7255089804fb2a52e5a0eb447639857b62be3d78a7a30fe0b407d371344" address="unix:///run/containerd/s/d8111e52e7cbcaeebf1e60781e5e2acaf0ed49c3ac41feb9750d56c2fbb8662f" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:30:39.763147 containerd[1604]: time="2026-03-02T13:30:39.761425744Z" level=info msg="connecting to shim 3488938a92b1ac6af547815f6c60e0043d9bc65481a20ebb3a6ad3c3ad52d6fd" address="unix:///run/containerd/s/cf790d33586fba413e87521cfe1af8f915794a0e019d765ee4bfd0941f1c6797" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:30:39.866586 systemd[1]: Started cri-containerd-9539e7255089804fb2a52e5a0eb447639857b62be3d78a7a30fe0b407d371344.scope - libcontainer container 9539e7255089804fb2a52e5a0eb447639857b62be3d78a7a30fe0b407d371344. Mar 2 13:30:39.877188 systemd[1]: Started cri-containerd-0becb9021ee0cb8d1130ef2894717b2a721b080d061e1395a1aadd009f208c80.scope - libcontainer container 0becb9021ee0cb8d1130ef2894717b2a721b080d061e1395a1aadd009f208c80. 
Mar 2 13:30:39.880722 systemd[1]: Started cri-containerd-3488938a92b1ac6af547815f6c60e0043d9bc65481a20ebb3a6ad3c3ad52d6fd.scope - libcontainer container 3488938a92b1ac6af547815f6c60e0043d9bc65481a20ebb3a6ad3c3ad52d6fd. Mar 2 13:30:39.897491 kubelet[2469]: E0302 13:30:39.897419 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.30.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:30:39.918956 kubelet[2469]: I0302 13:30:39.918897 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.919635 kubelet[2469]: E0302 13:30:39.919593 2469 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.118:6443/api/v1/nodes\": dial tcp 10.230.30.118:6443: connect: connection refused" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:39.943233 kubelet[2469]: E0302 13:30:39.943135 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.30.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:30:40.025741 containerd[1604]: time="2026-03-02T13:30:40.024965871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-u4d8l.gb1.brightbox.com,Uid:2e52c2e21e030efe1b73dc21c3e36552,Namespace:kube-system,Attempt:0,} returns sandbox id \"9539e7255089804fb2a52e5a0eb447639857b62be3d78a7a30fe0b407d371344\"" Mar 2 13:30:40.038194 containerd[1604]: time="2026-03-02T13:30:40.037927696Z" level=info msg="CreateContainer within sandbox \"9539e7255089804fb2a52e5a0eb447639857b62be3d78a7a30fe0b407d371344\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:30:40.047455 containerd[1604]: time="2026-03-02T13:30:40.047394035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-u4d8l.gb1.brightbox.com,Uid:01af3b129f72416bc333e34e18195d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"3488938a92b1ac6af547815f6c60e0043d9bc65481a20ebb3a6ad3c3ad52d6fd\"" Mar 2 13:30:40.050678 containerd[1604]: time="2026-03-02T13:30:40.050628145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-u4d8l.gb1.brightbox.com,Uid:1c8be079247f2142d5741c450bbff0b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0becb9021ee0cb8d1130ef2894717b2a721b080d061e1395a1aadd009f208c80\"" Mar 2 13:30:40.058053 containerd[1604]: time="2026-03-02T13:30:40.057922446Z" level=info msg="CreateContainer within sandbox \"0becb9021ee0cb8d1130ef2894717b2a721b080d061e1395a1aadd009f208c80\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:30:40.061029 containerd[1604]: time="2026-03-02T13:30:40.060991971Z" level=info msg="CreateContainer within sandbox \"3488938a92b1ac6af547815f6c60e0043d9bc65481a20ebb3a6ad3c3ad52d6fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:30:40.065696 containerd[1604]: time="2026-03-02T13:30:40.065664094Z" level=info msg="Container da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:30:40.071697 containerd[1604]: time="2026-03-02T13:30:40.071667049Z" level=info msg="Container 0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:30:40.074717 containerd[1604]: time="2026-03-02T13:30:40.074682594Z" level=info msg="Container ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:30:40.078218 containerd[1604]: time="2026-03-02T13:30:40.078024949Z" 
level=info msg="CreateContainer within sandbox \"9539e7255089804fb2a52e5a0eb447639857b62be3d78a7a30fe0b407d371344\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712\"" Mar 2 13:30:40.080636 containerd[1604]: time="2026-03-02T13:30:40.080586768Z" level=info msg="StartContainer for \"da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712\"" Mar 2 13:30:40.082215 containerd[1604]: time="2026-03-02T13:30:40.082045252Z" level=info msg="CreateContainer within sandbox \"3488938a92b1ac6af547815f6c60e0043d9bc65481a20ebb3a6ad3c3ad52d6fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48\"" Mar 2 13:30:40.083180 containerd[1604]: time="2026-03-02T13:30:40.082822755Z" level=info msg="StartContainer for \"0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48\"" Mar 2 13:30:40.084591 containerd[1604]: time="2026-03-02T13:30:40.084559274Z" level=info msg="connecting to shim da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712" address="unix:///run/containerd/s/d8111e52e7cbcaeebf1e60781e5e2acaf0ed49c3ac41feb9750d56c2fbb8662f" protocol=ttrpc version=3 Mar 2 13:30:40.086266 containerd[1604]: time="2026-03-02T13:30:40.086209657Z" level=info msg="CreateContainer within sandbox \"0becb9021ee0cb8d1130ef2894717b2a721b080d061e1395a1aadd009f208c80\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185\"" Mar 2 13:30:40.086849 containerd[1604]: time="2026-03-02T13:30:40.086815589Z" level=info msg="connecting to shim 0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48" address="unix:///run/containerd/s/cf790d33586fba413e87521cfe1af8f915794a0e019d765ee4bfd0941f1c6797" protocol=ttrpc version=3 Mar 2 13:30:40.087525 containerd[1604]: 
time="2026-03-02T13:30:40.087475260Z" level=info msg="StartContainer for \"ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185\"" Mar 2 13:30:40.088716 containerd[1604]: time="2026-03-02T13:30:40.088645100Z" level=info msg="connecting to shim ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185" address="unix:///run/containerd/s/6f31aa9d67b7438aed9fe32acd4933aa1fb508e082866fd4d607badbcb7a2842" protocol=ttrpc version=3 Mar 2 13:30:40.125413 systemd[1]: Started cri-containerd-ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185.scope - libcontainer container ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185. Mar 2 13:30:40.139434 systemd[1]: Started cri-containerd-0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48.scope - libcontainer container 0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48. Mar 2 13:30:40.141086 systemd[1]: Started cri-containerd-da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712.scope - libcontainer container da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712. 
Mar 2 13:30:40.278622 containerd[1604]: time="2026-03-02T13:30:40.277207961Z" level=info msg="StartContainer for \"0bb35514154e6a81ac6c335d7d57b82688ad35a879fbb84645901ba35845da48\" returns successfully" Mar 2 13:30:40.315938 containerd[1604]: time="2026-03-02T13:30:40.315794188Z" level=info msg="StartContainer for \"da0654aff9573e9e699811bd45c26caca67b429e9187e3ff625b1d6a91c6b712\" returns successfully" Mar 2 13:30:40.321330 containerd[1604]: time="2026-03-02T13:30:40.321085488Z" level=info msg="StartContainer for \"ce483ad762de31d216dfb7ff09ccad67a3b84d80ced462441c094043283ff185\" returns successfully" Mar 2 13:30:40.438461 kubelet[2469]: E0302 13:30:40.438027 2469 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.30.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.30.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:30:40.536862 kubelet[2469]: E0302 13:30:40.536364 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-u4d8l.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.118:6443: connect: connection refused" interval="1.6s" Mar 2 13:30:40.576527 update_engine[1546]: I20260302 13:30:40.575322 1546 update_attempter.cc:509] Updating boot flags... 
Mar 2 13:30:40.728215 kubelet[2469]: I0302 13:30:40.726591 2469 kubelet_node_status.go:75] "Attempting to register node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:41.224372 kubelet[2469]: E0302 13:30:41.223909 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:41.233323 kubelet[2469]: E0302 13:30:41.225268 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:41.234938 kubelet[2469]: E0302 13:30:41.234911 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:42.238722 kubelet[2469]: E0302 13:30:42.237357 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:42.238722 kubelet[2469]: E0302 13:30:42.237826 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:42.241763 kubelet[2469]: E0302 13:30:42.241598 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.240824 kubelet[2469]: E0302 13:30:43.240759 2469 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.241737 kubelet[2469]: E0302 13:30:43.241522 2469 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.816750 kubelet[2469]: E0302 13:30:43.816590 2469 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-u4d8l.gb1.brightbox.com\" not found" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.820179 kubelet[2469]: I0302 13:30:43.819335 2469 kubelet_node_status.go:78] "Successfully registered node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.820179 kubelet[2469]: E0302 13:30:43.819395 2469 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-u4d8l.gb1.brightbox.com\": node \"srv-u4d8l.gb1.brightbox.com\" not found" Mar 2 13:30:43.895769 kubelet[2469]: I0302 13:30:43.895687 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.915731 kubelet[2469]: E0302 13:30:43.915661 2469 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.917523 kubelet[2469]: I0302 13:30:43.917205 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.922190 kubelet[2469]: E0302 13:30:43.922058 2469 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-u4d8l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.922190 kubelet[2469]: I0302 13:30:43.922095 2469 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:43.926896 
kubelet[2469]: E0302 13:30:43.926844 2469 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:44.053953 kubelet[2469]: I0302 13:30:44.053799 2469 apiserver.go:52] "Watching apiserver" Mar 2 13:30:44.101245 kubelet[2469]: I0302 13:30:44.100463 2469 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 13:30:46.142629 systemd[1]: Reload requested from client PID 2766 ('systemctl') (unit session-9.scope)... Mar 2 13:30:46.143235 systemd[1]: Reloading... Mar 2 13:30:46.348210 zram_generator::config[2811]: No configuration found. Mar 2 13:30:46.770085 systemd[1]: Reloading finished in 626 ms. Mar 2 13:30:46.828083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:30:46.838656 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:30:46.839139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:30:46.839258 systemd[1]: kubelet.service: Consumed 1.992s CPU time, 129.6M memory peak. Mar 2 13:30:46.842319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:30:47.169293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:30:47.182838 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:30:47.291894 kubelet[2875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:30:47.291894 kubelet[2875]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Mar 2 13:30:47.291894 kubelet[2875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:30:47.293694 kubelet[2875]: I0302 13:30:47.292001 2875 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:30:47.306811 kubelet[2875]: I0302 13:30:47.306753 2875 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:30:47.306811 kubelet[2875]: I0302 13:30:47.306798 2875 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:30:47.307332 kubelet[2875]: I0302 13:30:47.307222 2875 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:30:47.310001 kubelet[2875]: I0302 13:30:47.309633 2875 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:30:47.321357 kubelet[2875]: I0302 13:30:47.320120 2875 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:30:47.332525 kubelet[2875]: I0302 13:30:47.332480 2875 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 13:30:47.332813 sudo[2889]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 2 13:30:47.333977 sudo[2889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 2 13:30:47.350078 kubelet[2875]: I0302 13:30:47.347307 2875 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 2 13:30:47.350078 kubelet[2875]: I0302 13:30:47.347684 2875 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:30:47.350078 kubelet[2875]: I0302 13:30:47.347728 2875 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-u4d8l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:30:47.350078 kubelet[2875]: I0302 13:30:47.348056 2875 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 
13:30:47.350542 kubelet[2875]: I0302 13:30:47.348074 2875 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:30:47.350542 kubelet[2875]: I0302 13:30:47.348843 2875 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:30:47.350542 kubelet[2875]: I0302 13:30:47.349317 2875 kubelet.go:480] "Attempting to sync node with API server" Mar 2 13:30:47.350542 kubelet[2875]: I0302 13:30:47.349382 2875 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:30:47.350542 kubelet[2875]: I0302 13:30:47.349481 2875 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:30:47.350542 kubelet[2875]: I0302 13:30:47.349517 2875 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:30:47.357408 kubelet[2875]: I0302 13:30:47.357363 2875 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 13:30:47.360385 kubelet[2875]: I0302 13:30:47.360352 2875 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:30:47.369883 kubelet[2875]: I0302 13:30:47.368794 2875 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:30:47.369883 kubelet[2875]: I0302 13:30:47.368903 2875 server.go:1289] "Started kubelet" Mar 2 13:30:47.377998 kubelet[2875]: I0302 13:30:47.377686 2875 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:30:47.393111 kubelet[2875]: I0302 13:30:47.388820 2875 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:30:47.401639 kubelet[2875]: E0302 13:30:47.397838 2875 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:30:47.401639 kubelet[2875]: I0302 13:30:47.386345 2875 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:30:47.401639 kubelet[2875]: I0302 13:30:47.378267 2875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:30:47.401639 kubelet[2875]: I0302 13:30:47.399366 2875 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:30:47.401639 kubelet[2875]: I0302 13:30:47.385781 2875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:30:47.410676 kubelet[2875]: E0302 13:30:47.406777 2875 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-u4d8l.gb1.brightbox.com\" not found" Mar 2 13:30:47.412008 kubelet[2875]: I0302 13:30:47.411972 2875 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:30:47.415477 kubelet[2875]: I0302 13:30:47.412596 2875 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:30:47.416109 kubelet[2875]: I0302 13:30:47.416081 2875 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:30:47.422318 kubelet[2875]: I0302 13:30:47.418266 2875 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 13:30:47.423184 kubelet[2875]: I0302 13:30:47.423143 2875 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:30:47.424244 kubelet[2875]: I0302 13:30:47.424220 2875 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:30:47.424384 kubelet[2875]: I0302 13:30:47.424363 2875 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 13:30:47.424494 kubelet[2875]: I0302 13:30:47.424477 2875 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:30:47.427059 kubelet[2875]: E0302 13:30:47.427026 2875 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:30:47.439926 kubelet[2875]: I0302 13:30:47.439892 2875 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:30:47.445763 kubelet[2875]: I0302 13:30:47.444992 2875 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:30:47.459521 kubelet[2875]: I0302 13:30:47.456530 2875 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:30:47.527353 kubelet[2875]: E0302 13:30:47.527310 2875 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:30:47.584107 kubelet[2875]: I0302 13:30:47.584069 2875 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:30:47.584478 kubelet[2875]: I0302 13:30:47.584452 2875 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:30:47.584620 kubelet[2875]: I0302 13:30:47.584602 2875 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:30:47.585039 kubelet[2875]: I0302 13:30:47.585014 2875 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 13:30:47.585183 kubelet[2875]: I0302 13:30:47.585127 2875 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 13:30:47.585310 kubelet[2875]: I0302 13:30:47.585292 2875 policy_none.go:49] "None policy: Start" Mar 2 13:30:47.585444 kubelet[2875]: I0302 13:30:47.585423 2875 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 13:30:47.585566 kubelet[2875]: I0302 13:30:47.585548 2875 state_mem.go:35] "Initializing new in-memory 
state store" Mar 2 13:30:47.585814 kubelet[2875]: I0302 13:30:47.585793 2875 state_mem.go:75] "Updated machine memory state" Mar 2 13:30:47.601772 kubelet[2875]: E0302 13:30:47.601529 2875 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:30:47.603196 kubelet[2875]: I0302 13:30:47.602273 2875 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:30:47.603196 kubelet[2875]: I0302 13:30:47.602306 2875 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:30:47.609473 kubelet[2875]: I0302 13:30:47.609448 2875 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:30:47.612447 kubelet[2875]: E0302 13:30:47.612419 2875 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:30:47.733064 kubelet[2875]: I0302 13:30:47.732879 2875 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.733426 kubelet[2875]: I0302 13:30:47.733405 2875 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.734554 kubelet[2875]: I0302 13:30:47.734378 2875 kubelet_node_status.go:75] "Attempting to register node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.734945 kubelet[2875]: I0302 13:30:47.734734 2875 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.744947 kubelet[2875]: I0302 13:30:47.744902 2875 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:30:47.750182 kubelet[2875]: I0302 13:30:47.748059 2875 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:30:47.756493 kubelet[2875]: I0302 13:30:47.756385 2875 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:30:47.756813 kubelet[2875]: I0302 13:30:47.756704 2875 kubelet_node_status.go:124] "Node was previously registered" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.756986 kubelet[2875]: I0302 13:30:47.756967 2875 kubelet_node_status.go:78] "Successfully registered node" node="srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.829740 kubelet[2875]: I0302 13:30:47.829535 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.829978 kubelet[2875]: I0302 13:30:47.829749 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e52c2e21e030efe1b73dc21c3e36552-ca-certs\") pod \"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" (UID: \"2e52c2e21e030efe1b73dc21c3e36552\") " pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.829978 kubelet[2875]: I0302 13:30:47.829815 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-flexvolume-dir\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " 
pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.829978 kubelet[2875]: I0302 13:30:47.829868 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-kubeconfig\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.829978 kubelet[2875]: I0302 13:30:47.829908 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c8be079247f2142d5741c450bbff0b2-kubeconfig\") pod \"kube-scheduler-srv-u4d8l.gb1.brightbox.com\" (UID: \"1c8be079247f2142d5741c450bbff0b2\") " pod="kube-system/kube-scheduler-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.829978 kubelet[2875]: I0302 13:30:47.829963 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e52c2e21e030efe1b73dc21c3e36552-k8s-certs\") pod \"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" (UID: \"2e52c2e21e030efe1b73dc21c3e36552\") " pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.830281 kubelet[2875]: I0302 13:30:47.829992 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e52c2e21e030efe1b73dc21c3e36552-usr-share-ca-certificates\") pod \"kube-apiserver-srv-u4d8l.gb1.brightbox.com\" (UID: \"2e52c2e21e030efe1b73dc21c3e36552\") " pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.830281 kubelet[2875]: I0302 13:30:47.830055 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-ca-certs\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.830281 kubelet[2875]: I0302 13:30:47.830085 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01af3b129f72416bc333e34e18195d93-k8s-certs\") pod \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" (UID: \"01af3b129f72416bc333e34e18195d93\") " pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:47.998127 sudo[2889]: pam_unix(sudo:session): session closed for user root Mar 2 13:30:48.352944 kubelet[2875]: I0302 13:30:48.352760 2875 apiserver.go:52] "Watching apiserver" Mar 2 13:30:48.415895 kubelet[2875]: I0302 13:30:48.415829 2875 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 13:30:48.542794 kubelet[2875]: I0302 13:30:48.541328 2875 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:48.560526 kubelet[2875]: I0302 13:30:48.560483 2875 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:30:48.560876 kubelet[2875]: E0302 13:30:48.560782 2875 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-u4d8l.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" Mar 2 13:30:48.603953 kubelet[2875]: I0302 13:30:48.603562 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-u4d8l.gb1.brightbox.com" podStartSLOduration=1.603525602 podStartE2EDuration="1.603525602s" 
podCreationTimestamp="2026-03-02 13:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:30:48.589580911 +0000 UTC m=+1.394852218" watchObservedRunningTime="2026-03-02 13:30:48.603525602 +0000 UTC m=+1.408796891" Mar 2 13:30:48.620281 kubelet[2875]: I0302 13:30:48.619708 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-u4d8l.gb1.brightbox.com" podStartSLOduration=1.6196847970000001 podStartE2EDuration="1.619684797s" podCreationTimestamp="2026-03-02 13:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:30:48.60481087 +0000 UTC m=+1.410082191" watchObservedRunningTime="2026-03-02 13:30:48.619684797 +0000 UTC m=+1.424956081" Mar 2 13:30:48.620281 kubelet[2875]: I0302 13:30:48.619852 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-u4d8l.gb1.brightbox.com" podStartSLOduration=1.6198440889999999 podStartE2EDuration="1.619844089s" podCreationTimestamp="2026-03-02 13:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:30:48.619138084 +0000 UTC m=+1.424409393" watchObservedRunningTime="2026-03-02 13:30:48.619844089 +0000 UTC m=+1.425115385" Mar 2 13:30:49.826138 sudo[1863]: pam_unix(sudo:session): session closed for user root Mar 2 13:30:49.920737 sshd[1862]: Connection closed by 68.220.241.50 port 37200 Mar 2 13:30:49.923271 sshd-session[1859]: pam_unix(sshd:session): session closed for user core Mar 2 13:30:49.929712 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:30:49.930242 systemd[1]: sshd@6-10.230.30.118:22-68.220.241.50:37200.service: Deactivated successfully. 
Mar 2 13:30:49.934138 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 13:30:49.934621 systemd[1]: session-9.scope: Consumed 6.929s CPU time, 215.1M memory peak. Mar 2 13:30:49.939403 systemd-logind[1545]: Removed session 9. Mar 2 13:30:51.141806 kubelet[2875]: I0302 13:30:51.141754 2875 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:30:51.143487 containerd[1604]: time="2026-03-02T13:30:51.143299750Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 13:30:51.144481 kubelet[2875]: I0302 13:30:51.143519 2875 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:30:52.039191 systemd[1]: Created slice kubepods-besteffort-pod8f003c77_0b36_46a7_85ef_9f69493e28d4.slice - libcontainer container kubepods-besteffort-pod8f003c77_0b36_46a7_85ef_9f69493e28d4.slice. Mar 2 13:30:52.060937 systemd[1]: Created slice kubepods-burstable-pod534e8999_18ef_49e2_8e65_87338c77e12c.slice - libcontainer container kubepods-burstable-pod534e8999_18ef_49e2_8e65_87338c77e12c.slice. 
Mar 2 13:30:52.063450 kubelet[2875]: I0302 13:30:52.063397 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f003c77-0b36-46a7-85ef-9f69493e28d4-lib-modules\") pod \"kube-proxy-4z76r\" (UID: \"8f003c77-0b36-46a7-85ef-9f69493e28d4\") " pod="kube-system/kube-proxy-4z76r" Mar 2 13:30:52.063553 kubelet[2875]: I0302 13:30:52.063459 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-cgroup\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063553 kubelet[2875]: I0302 13:30:52.063494 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cni-path\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063553 kubelet[2875]: I0302 13:30:52.063537 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-net\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063699 kubelet[2875]: I0302 13:30:52.063569 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-run\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063699 kubelet[2875]: I0302 13:30:52.063593 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-hostproc\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063699 kubelet[2875]: I0302 13:30:52.063631 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-etc-cni-netd\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063699 kubelet[2875]: I0302 13:30:52.063661 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlq99\" (UniqueName: \"kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-kube-api-access-hlq99\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063889 kubelet[2875]: I0302 13:30:52.063693 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-bpf-maps\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063889 kubelet[2875]: I0302 13:30:52.063738 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-lib-modules\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063889 kubelet[2875]: I0302 13:30:52.063766 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/534e8999-18ef-49e2-8e65-87338c77e12c-clustermesh-secrets\") pod 
\"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063889 kubelet[2875]: I0302 13:30:52.063791 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-kernel\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063889 kubelet[2875]: I0302 13:30:52.063827 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-xtables-lock\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.063889 kubelet[2875]: I0302 13:30:52.063854 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-config-path\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.065055 kubelet[2875]: I0302 13:30:52.063880 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-hubble-tls\") pod \"cilium-vgln8\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " pod="kube-system/cilium-vgln8" Mar 2 13:30:52.065055 kubelet[2875]: I0302 13:30:52.063905 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgv5\" (UniqueName: \"kubernetes.io/projected/8f003c77-0b36-46a7-85ef-9f69493e28d4-kube-api-access-rjgv5\") pod \"kube-proxy-4z76r\" (UID: \"8f003c77-0b36-46a7-85ef-9f69493e28d4\") " pod="kube-system/kube-proxy-4z76r" 
Mar 2 13:30:52.065055 kubelet[2875]: I0302 13:30:52.063931 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f003c77-0b36-46a7-85ef-9f69493e28d4-kube-proxy\") pod \"kube-proxy-4z76r\" (UID: \"8f003c77-0b36-46a7-85ef-9f69493e28d4\") " pod="kube-system/kube-proxy-4z76r" Mar 2 13:30:52.065055 kubelet[2875]: I0302 13:30:52.063961 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f003c77-0b36-46a7-85ef-9f69493e28d4-xtables-lock\") pod \"kube-proxy-4z76r\" (UID: \"8f003c77-0b36-46a7-85ef-9f69493e28d4\") " pod="kube-system/kube-proxy-4z76r" Mar 2 13:30:52.244797 systemd[1]: Created slice kubepods-besteffort-pod956d1a8f_c8bd_4ef2_abb7_6e2e674444af.slice - libcontainer container kubepods-besteffort-pod956d1a8f_c8bd_4ef2_abb7_6e2e674444af.slice. Mar 2 13:30:52.266907 kubelet[2875]: I0302 13:30:52.266647 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dtwcb\" (UID: \"956d1a8f-c8bd-4ef2-abb7-6e2e674444af\") " pod="kube-system/cilium-operator-6c4d7847fc-dtwcb" Mar 2 13:30:52.269066 kubelet[2875]: I0302 13:30:52.268694 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpw5j\" (UniqueName: \"kubernetes.io/projected/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-kube-api-access-kpw5j\") pod \"cilium-operator-6c4d7847fc-dtwcb\" (UID: \"956d1a8f-c8bd-4ef2-abb7-6e2e674444af\") " pod="kube-system/cilium-operator-6c4d7847fc-dtwcb" Mar 2 13:30:52.356736 containerd[1604]: time="2026-03-02T13:30:52.356597470Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-4z76r,Uid:8f003c77-0b36-46a7-85ef-9f69493e28d4,Namespace:kube-system,Attempt:0,}" Mar 2 13:30:52.371493 containerd[1604]: time="2026-03-02T13:30:52.371371749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vgln8,Uid:534e8999-18ef-49e2-8e65-87338c77e12c,Namespace:kube-system,Attempt:0,}" Mar 2 13:30:52.404926 containerd[1604]: time="2026-03-02T13:30:52.404806069Z" level=info msg="connecting to shim e2739082c284a59d3437d912acec2ae4ad442cc0ceee383ae9d08fce53ff245a" address="unix:///run/containerd/s/89e23576d3ea23c911249173f929cdbb4c190710973b2a0f1e5ba88ea1de865d" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:30:52.428486 containerd[1604]: time="2026-03-02T13:30:52.428346918Z" level=info msg="connecting to shim e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6" address="unix:///run/containerd/s/74148cb2622275063e51b1f92136c36ef2e50630a7bdaed95b67a6bbd14c34b8" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:30:52.459756 systemd[1]: Started cri-containerd-e2739082c284a59d3437d912acec2ae4ad442cc0ceee383ae9d08fce53ff245a.scope - libcontainer container e2739082c284a59d3437d912acec2ae4ad442cc0ceee383ae9d08fce53ff245a. Mar 2 13:30:52.496417 systemd[1]: Started cri-containerd-e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6.scope - libcontainer container e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6. 
Mar 2 13:30:52.537253 containerd[1604]: time="2026-03-02T13:30:52.537124905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4z76r,Uid:8f003c77-0b36-46a7-85ef-9f69493e28d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2739082c284a59d3437d912acec2ae4ad442cc0ceee383ae9d08fce53ff245a\"" Mar 2 13:30:52.551504 containerd[1604]: time="2026-03-02T13:30:52.551332398Z" level=info msg="CreateContainer within sandbox \"e2739082c284a59d3437d912acec2ae4ad442cc0ceee383ae9d08fce53ff245a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:30:52.553292 containerd[1604]: time="2026-03-02T13:30:52.553255962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dtwcb,Uid:956d1a8f-c8bd-4ef2-abb7-6e2e674444af,Namespace:kube-system,Attempt:0,}" Mar 2 13:30:52.576938 containerd[1604]: time="2026-03-02T13:30:52.576888596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vgln8,Uid:534e8999-18ef-49e2-8e65-87338c77e12c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\"" Mar 2 13:30:52.581257 containerd[1604]: time="2026-03-02T13:30:52.581210263Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 2 13:30:52.588202 containerd[1604]: time="2026-03-02T13:30:52.588017411Z" level=info msg="Container d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:30:52.602962 containerd[1604]: time="2026-03-02T13:30:52.602899946Z" level=info msg="CreateContainer within sandbox \"e2739082c284a59d3437d912acec2ae4ad442cc0ceee383ae9d08fce53ff245a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211\"" Mar 2 13:30:52.605979 containerd[1604]: time="2026-03-02T13:30:52.605913575Z" level=info 
msg="StartContainer for \"d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211\"" Mar 2 13:30:52.607349 containerd[1604]: time="2026-03-02T13:30:52.607248034Z" level=info msg="connecting to shim 1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a" address="unix:///run/containerd/s/c40e27a68e486798d54b8e408f390606f98a0e1cbbd6296b5ee096e1fa2eec0f" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:30:52.609399 containerd[1604]: time="2026-03-02T13:30:52.609024752Z" level=info msg="connecting to shim d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211" address="unix:///run/containerd/s/89e23576d3ea23c911249173f929cdbb4c190710973b2a0f1e5ba88ea1de865d" protocol=ttrpc version=3 Mar 2 13:30:52.649423 systemd[1]: Started cri-containerd-d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211.scope - libcontainer container d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211. Mar 2 13:30:52.664535 systemd[1]: Started cri-containerd-1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a.scope - libcontainer container 1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a. 
Mar 2 13:30:52.767646 containerd[1604]: time="2026-03-02T13:30:52.767051809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dtwcb,Uid:956d1a8f-c8bd-4ef2-abb7-6e2e674444af,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\"" Mar 2 13:30:52.777949 containerd[1604]: time="2026-03-02T13:30:52.777885068Z" level=info msg="StartContainer for \"d962c13fc56357c17a9a5d9821f2ad79de0a64ac4c938e590e690a2bad177211\" returns successfully" Mar 2 13:30:55.006728 kubelet[2875]: I0302 13:30:55.006638 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4z76r" podStartSLOduration=4.006611956 podStartE2EDuration="4.006611956s" podCreationTimestamp="2026-03-02 13:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:30:53.588507101 +0000 UTC m=+6.393778403" watchObservedRunningTime="2026-03-02 13:30:55.006611956 +0000 UTC m=+7.811883267" Mar 2 13:31:02.126956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581868526.mount: Deactivated successfully. 
Mar 2 13:31:05.406186 containerd[1604]: time="2026-03-02T13:31:05.406083855Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:05.408889 containerd[1604]: time="2026-03-02T13:31:05.408516657Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 2 13:31:05.408889 containerd[1604]: time="2026-03-02T13:31:05.408822045Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:05.411286 containerd[1604]: time="2026-03-02T13:31:05.411189997Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.829898386s" Mar 2 13:31:05.411286 containerd[1604]: time="2026-03-02T13:31:05.411235550Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 2 13:31:05.413802 containerd[1604]: time="2026-03-02T13:31:05.413550012Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 2 13:31:05.420977 containerd[1604]: time="2026-03-02T13:31:05.418589107Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:31:05.503415 containerd[1604]: time="2026-03-02T13:31:05.503363664Z" level=info msg="Container 25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:05.507523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464674913.mount: Deactivated successfully. Mar 2 13:31:05.515694 containerd[1604]: time="2026-03-02T13:31:05.515642493Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\"" Mar 2 13:31:05.518316 containerd[1604]: time="2026-03-02T13:31:05.518142156Z" level=info msg="StartContainer for \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\"" Mar 2 13:31:05.520430 containerd[1604]: time="2026-03-02T13:31:05.520387208Z" level=info msg="connecting to shim 25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281" address="unix:///run/containerd/s/74148cb2622275063e51b1f92136c36ef2e50630a7bdaed95b67a6bbd14c34b8" protocol=ttrpc version=3 Mar 2 13:31:05.548415 systemd[1]: Started cri-containerd-25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281.scope - libcontainer container 25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281. Mar 2 13:31:05.599474 containerd[1604]: time="2026-03-02T13:31:05.599382490Z" level=info msg="StartContainer for \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\" returns successfully" Mar 2 13:31:05.622850 systemd[1]: cri-containerd-25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281.scope: Deactivated successfully. 
Mar 2 13:31:05.688185 containerd[1604]: time="2026-03-02T13:31:05.687859212Z" level=info msg="received container exit event container_id:\"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\" id:\"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\" pid:3299 exited_at:{seconds:1772458265 nanos:628844347}" Mar 2 13:31:05.731321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281-rootfs.mount: Deactivated successfully. Mar 2 13:31:06.643199 containerd[1604]: time="2026-03-02T13:31:06.642309740Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:31:06.661406 containerd[1604]: time="2026-03-02T13:31:06.661351493Z" level=info msg="Container e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:06.673654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070791151.mount: Deactivated successfully. 
Mar 2 13:31:06.679110 containerd[1604]: time="2026-03-02T13:31:06.678926252Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\"" Mar 2 13:31:06.680758 containerd[1604]: time="2026-03-02T13:31:06.680498127Z" level=info msg="StartContainer for \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\"" Mar 2 13:31:06.681944 containerd[1604]: time="2026-03-02T13:31:06.681856871Z" level=info msg="connecting to shim e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf" address="unix:///run/containerd/s/74148cb2622275063e51b1f92136c36ef2e50630a7bdaed95b67a6bbd14c34b8" protocol=ttrpc version=3 Mar 2 13:31:06.720527 systemd[1]: Started cri-containerd-e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf.scope - libcontainer container e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf. Mar 2 13:31:06.767499 containerd[1604]: time="2026-03-02T13:31:06.767353214Z" level=info msg="StartContainer for \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\" returns successfully" Mar 2 13:31:06.789183 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:31:06.789848 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:31:06.790189 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:31:06.793341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:31:06.799340 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 2 13:31:06.801340 systemd[1]: cri-containerd-e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf.scope: Deactivated successfully. 
Mar 2 13:31:06.806569 containerd[1604]: time="2026-03-02T13:31:06.805793700Z" level=info msg="received container exit event container_id:\"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\" id:\"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\" pid:3345 exited_at:{seconds:1772458266 nanos:802463896}" Mar 2 13:31:06.865805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:31:07.672905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf-rootfs.mount: Deactivated successfully. Mar 2 13:31:07.684807 containerd[1604]: time="2026-03-02T13:31:07.684696809Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:31:07.709632 containerd[1604]: time="2026-03-02T13:31:07.709185578Z" level=info msg="Container c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:07.714630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031461990.mount: Deactivated successfully. 
Mar 2 13:31:07.728200 containerd[1604]: time="2026-03-02T13:31:07.728022736Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\"" Mar 2 13:31:07.729775 containerd[1604]: time="2026-03-02T13:31:07.729739442Z" level=info msg="StartContainer for \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\"" Mar 2 13:31:07.732291 containerd[1604]: time="2026-03-02T13:31:07.732142431Z" level=info msg="connecting to shim c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af" address="unix:///run/containerd/s/74148cb2622275063e51b1f92136c36ef2e50630a7bdaed95b67a6bbd14c34b8" protocol=ttrpc version=3 Mar 2 13:31:07.782367 systemd[1]: Started cri-containerd-c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af.scope - libcontainer container c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af. Mar 2 13:31:07.905658 containerd[1604]: time="2026-03-02T13:31:07.905076480Z" level=info msg="StartContainer for \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\" returns successfully" Mar 2 13:31:07.906730 systemd[1]: cri-containerd-c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af.scope: Deactivated successfully. 
Mar 2 13:31:07.916190 containerd[1604]: time="2026-03-02T13:31:07.915826995Z" level=info msg="received container exit event container_id:\"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\" id:\"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\" pid:3391 exited_at:{seconds:1772458267 nanos:913871956}" Mar 2 13:31:08.664859 containerd[1604]: time="2026-03-02T13:31:08.664785537Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:31:08.670763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403887473.mount: Deactivated successfully. Mar 2 13:31:08.671598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af-rootfs.mount: Deactivated successfully. Mar 2 13:31:08.696192 containerd[1604]: time="2026-03-02T13:31:08.696122651Z" level=info msg="Container e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:08.717120 containerd[1604]: time="2026-03-02T13:31:08.717065012Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\"" Mar 2 13:31:08.720640 containerd[1604]: time="2026-03-02T13:31:08.720577306Z" level=info msg="StartContainer for \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\"" Mar 2 13:31:08.724452 containerd[1604]: time="2026-03-02T13:31:08.724412156Z" level=info msg="connecting to shim e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2" address="unix:///run/containerd/s/74148cb2622275063e51b1f92136c36ef2e50630a7bdaed95b67a6bbd14c34b8" protocol=ttrpc version=3 Mar 2 13:31:08.771520 systemd[1]: 
Started cri-containerd-e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2.scope - libcontainer container e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2. Mar 2 13:31:08.833014 systemd[1]: cri-containerd-e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2.scope: Deactivated successfully. Mar 2 13:31:08.835872 containerd[1604]: time="2026-03-02T13:31:08.835817741Z" level=info msg="received container exit event container_id:\"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\" id:\"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\" pid:3439 exited_at:{seconds:1772458268 nanos:835088425}" Mar 2 13:31:08.838578 containerd[1604]: time="2026-03-02T13:31:08.838546315Z" level=info msg="StartContainer for \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\" returns successfully" Mar 2 13:31:08.874948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2-rootfs.mount: Deactivated successfully. Mar 2 13:31:09.672550 containerd[1604]: time="2026-03-02T13:31:09.672416990Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 2 13:31:09.702833 containerd[1604]: time="2026-03-02T13:31:09.702585391Z" level=info msg="Container 67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:09.707818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211751712.mount: Deactivated successfully. 
Mar 2 13:31:09.718548 containerd[1604]: time="2026-03-02T13:31:09.718355802Z" level=info msg="CreateContainer within sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\"" Mar 2 13:31:09.719095 containerd[1604]: time="2026-03-02T13:31:09.719065758Z" level=info msg="StartContainer for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\"" Mar 2 13:31:09.723523 containerd[1604]: time="2026-03-02T13:31:09.723123650Z" level=info msg="connecting to shim 67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43" address="unix:///run/containerd/s/74148cb2622275063e51b1f92136c36ef2e50630a7bdaed95b67a6bbd14c34b8" protocol=ttrpc version=3 Mar 2 13:31:09.768871 systemd[1]: Started cri-containerd-67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43.scope - libcontainer container 67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43. Mar 2 13:31:09.908999 containerd[1604]: time="2026-03-02T13:31:09.908810880Z" level=info msg="StartContainer for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" returns successfully" Mar 2 13:31:10.157830 kubelet[2875]: I0302 13:31:10.157780 2875 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 2 13:31:10.276505 systemd[1]: Created slice kubepods-burstable-pod0abb67a7_7e6a_401e_a765_e8f5db82ce7c.slice - libcontainer container kubepods-burstable-pod0abb67a7_7e6a_401e_a765_e8f5db82ce7c.slice. Mar 2 13:31:10.289648 systemd[1]: Created slice kubepods-burstable-pod25a8aa93_7940_4892_9d6b_5125510829e1.slice - libcontainer container kubepods-burstable-pod25a8aa93_7940_4892_9d6b_5125510829e1.slice. 
Mar 2 13:31:10.315733 kubelet[2875]: I0302 13:31:10.315627 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25a8aa93-7940-4892-9d6b-5125510829e1-config-volume\") pod \"coredns-674b8bbfcf-jkcwj\" (UID: \"25a8aa93-7940-4892-9d6b-5125510829e1\") " pod="kube-system/coredns-674b8bbfcf-jkcwj" Mar 2 13:31:10.316117 kubelet[2875]: I0302 13:31:10.315984 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0abb67a7-7e6a-401e-a765-e8f5db82ce7c-config-volume\") pod \"coredns-674b8bbfcf-zqsb5\" (UID: \"0abb67a7-7e6a-401e-a765-e8f5db82ce7c\") " pod="kube-system/coredns-674b8bbfcf-zqsb5" Mar 2 13:31:10.316117 kubelet[2875]: I0302 13:31:10.316083 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mjq\" (UniqueName: \"kubernetes.io/projected/0abb67a7-7e6a-401e-a765-e8f5db82ce7c-kube-api-access-m9mjq\") pod \"coredns-674b8bbfcf-zqsb5\" (UID: \"0abb67a7-7e6a-401e-a765-e8f5db82ce7c\") " pod="kube-system/coredns-674b8bbfcf-zqsb5" Mar 2 13:31:10.316572 kubelet[2875]: I0302 13:31:10.316394 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdlrf\" (UniqueName: \"kubernetes.io/projected/25a8aa93-7940-4892-9d6b-5125510829e1-kube-api-access-wdlrf\") pod \"coredns-674b8bbfcf-jkcwj\" (UID: \"25a8aa93-7940-4892-9d6b-5125510829e1\") " pod="kube-system/coredns-674b8bbfcf-jkcwj" Mar 2 13:31:10.595522 containerd[1604]: time="2026-03-02T13:31:10.595430382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkcwj,Uid:25a8aa93-7940-4892-9d6b-5125510829e1,Namespace:kube-system,Attempt:0,}" Mar 2 13:31:10.602448 containerd[1604]: time="2026-03-02T13:31:10.602331098Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-zqsb5,Uid:0abb67a7-7e6a-401e-a765-e8f5db82ce7c,Namespace:kube-system,Attempt:0,}" Mar 2 13:31:10.876142 kubelet[2875]: I0302 13:31:10.875762 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vgln8" podStartSLOduration=7.041141004 podStartE2EDuration="19.875066505s" podCreationTimestamp="2026-03-02 13:30:51 +0000 UTC" firstStartedPulling="2026-03-02 13:30:52.579075849 +0000 UTC m=+5.384347139" lastFinishedPulling="2026-03-02 13:31:05.413001344 +0000 UTC m=+18.218272640" observedRunningTime="2026-03-02 13:31:10.868853974 +0000 UTC m=+23.674125320" watchObservedRunningTime="2026-03-02 13:31:10.875066505 +0000 UTC m=+23.680337812" Mar 2 13:31:11.382025 containerd[1604]: time="2026-03-02T13:31:11.381951156Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:11.383537 containerd[1604]: time="2026-03-02T13:31:11.383263642Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 2 13:31:11.384363 containerd[1604]: time="2026-03-02T13:31:11.384311707Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:11.386556 containerd[1604]: time="2026-03-02T13:31:11.386505976Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 
5.972552718s" Mar 2 13:31:11.386664 containerd[1604]: time="2026-03-02T13:31:11.386559518Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 2 13:31:11.393760 containerd[1604]: time="2026-03-02T13:31:11.393647440Z" level=info msg="CreateContainer within sandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 2 13:31:11.406365 containerd[1604]: time="2026-03-02T13:31:11.405314069Z" level=info msg="Container bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:11.413837 containerd[1604]: time="2026-03-02T13:31:11.413785481Z" level=info msg="CreateContainer within sandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\"" Mar 2 13:31:11.415384 containerd[1604]: time="2026-03-02T13:31:11.415286168Z" level=info msg="StartContainer for \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\"" Mar 2 13:31:11.416716 containerd[1604]: time="2026-03-02T13:31:11.416636068Z" level=info msg="connecting to shim bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df" address="unix:///run/containerd/s/c40e27a68e486798d54b8e408f390606f98a0e1cbbd6296b5ee096e1fa2eec0f" protocol=ttrpc version=3 Mar 2 13:31:11.458477 systemd[1]: Started cri-containerd-bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df.scope - libcontainer container bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df. 
Mar 2 13:31:11.526946 containerd[1604]: time="2026-03-02T13:31:11.526890149Z" level=info msg="StartContainer for \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" returns successfully" Mar 2 13:31:15.009672 systemd-networkd[1503]: cilium_host: Link UP Mar 2 13:31:15.010124 systemd-networkd[1503]: cilium_net: Link UP Mar 2 13:31:15.011011 systemd-networkd[1503]: cilium_net: Gained carrier Mar 2 13:31:15.013113 systemd-networkd[1503]: cilium_host: Gained carrier Mar 2 13:31:15.188059 systemd-networkd[1503]: cilium_vxlan: Link UP Mar 2 13:31:15.188074 systemd-networkd[1503]: cilium_vxlan: Gained carrier Mar 2 13:31:15.202342 systemd-networkd[1503]: cilium_net: Gained IPv6LL Mar 2 13:31:15.514526 systemd-networkd[1503]: cilium_host: Gained IPv6LL Mar 2 13:31:15.849361 kernel: NET: Registered PF_ALG protocol family Mar 2 13:31:16.603304 systemd-networkd[1503]: cilium_vxlan: Gained IPv6LL Mar 2 13:31:16.955215 systemd-networkd[1503]: lxc_health: Link UP Mar 2 13:31:16.958574 systemd-networkd[1503]: lxc_health: Gained carrier Mar 2 13:31:17.240296 kernel: eth0: renamed from tmp69327 Mar 2 13:31:17.266515 systemd-networkd[1503]: lxc3d7f78b9d533: Link UP Mar 2 13:31:17.270845 systemd-networkd[1503]: lxc3d7f78b9d533: Gained carrier Mar 2 13:31:17.291335 systemd-networkd[1503]: lxc1aed451a45c4: Link UP Mar 2 13:31:17.301204 kernel: eth0: renamed from tmp5ae6e Mar 2 13:31:17.305375 systemd-networkd[1503]: lxc1aed451a45c4: Gained carrier Mar 2 13:31:18.406054 kubelet[2875]: I0302 13:31:18.405441 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dtwcb" podStartSLOduration=7.788392897 podStartE2EDuration="26.405417716s" podCreationTimestamp="2026-03-02 13:30:52 +0000 UTC" firstStartedPulling="2026-03-02 13:30:52.770787547 +0000 UTC m=+5.576058837" lastFinishedPulling="2026-03-02 13:31:11.387812368 +0000 UTC m=+24.193083656" observedRunningTime="2026-03-02 13:31:11.91135169 +0000 UTC 
m=+24.716623010" watchObservedRunningTime="2026-03-02 13:31:18.405417716 +0000 UTC m=+31.210689013" Mar 2 13:31:18.458536 systemd-networkd[1503]: lxc1aed451a45c4: Gained IPv6LL Mar 2 13:31:18.843237 systemd-networkd[1503]: lxc_health: Gained IPv6LL Mar 2 13:31:19.226539 systemd-networkd[1503]: lxc3d7f78b9d533: Gained IPv6LL Mar 2 13:31:23.477277 containerd[1604]: time="2026-03-02T13:31:23.476005676Z" level=info msg="connecting to shim 69327f4b765ab2ba893148428162af9cde8d1973ec6a721bf3daf672398485ad" address="unix:///run/containerd/s/a01d61cb0ab18f127ae61bf09098c493a81b4234a03bf6824f4601d79833fa81" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:31:23.488116 containerd[1604]: time="2026-03-02T13:31:23.487999100Z" level=info msg="connecting to shim 5ae6ec10e7c102cb93bc55b80577544e8ed071e869280b5b411a800308d3d8c4" address="unix:///run/containerd/s/f024a45b62d922e10e1ddc444bf7310abfe3ba7330290dacb2dba0b80334a826" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:31:23.549411 systemd[1]: Started cri-containerd-69327f4b765ab2ba893148428162af9cde8d1973ec6a721bf3daf672398485ad.scope - libcontainer container 69327f4b765ab2ba893148428162af9cde8d1973ec6a721bf3daf672398485ad. Mar 2 13:31:23.583401 systemd[1]: Started cri-containerd-5ae6ec10e7c102cb93bc55b80577544e8ed071e869280b5b411a800308d3d8c4.scope - libcontainer container 5ae6ec10e7c102cb93bc55b80577544e8ed071e869280b5b411a800308d3d8c4. 
Mar 2 13:31:23.717609 containerd[1604]: time="2026-03-02T13:31:23.717521816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zqsb5,Uid:0abb67a7-7e6a-401e-a765-e8f5db82ce7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae6ec10e7c102cb93bc55b80577544e8ed071e869280b5b411a800308d3d8c4\"" Mar 2 13:31:23.730026 containerd[1604]: time="2026-03-02T13:31:23.729580329Z" level=info msg="CreateContainer within sandbox \"5ae6ec10e7c102cb93bc55b80577544e8ed071e869280b5b411a800308d3d8c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:31:23.744566 containerd[1604]: time="2026-03-02T13:31:23.744507998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkcwj,Uid:25a8aa93-7940-4892-9d6b-5125510829e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"69327f4b765ab2ba893148428162af9cde8d1973ec6a721bf3daf672398485ad\"" Mar 2 13:31:23.776695 containerd[1604]: time="2026-03-02T13:31:23.776621468Z" level=info msg="CreateContainer within sandbox \"69327f4b765ab2ba893148428162af9cde8d1973ec6a721bf3daf672398485ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:31:23.796807 containerd[1604]: time="2026-03-02T13:31:23.796361367Z" level=info msg="Container 8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:23.808155 containerd[1604]: time="2026-03-02T13:31:23.808061984Z" level=info msg="Container 32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:31:23.810022 containerd[1604]: time="2026-03-02T13:31:23.809844392Z" level=info msg="CreateContainer within sandbox \"5ae6ec10e7c102cb93bc55b80577544e8ed071e869280b5b411a800308d3d8c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a\"" Mar 2 13:31:23.811789 containerd[1604]: time="2026-03-02T13:31:23.811757380Z" 
level=info msg="StartContainer for \"8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a\"" Mar 2 13:31:23.813518 containerd[1604]: time="2026-03-02T13:31:23.813485209Z" level=info msg="connecting to shim 8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a" address="unix:///run/containerd/s/f024a45b62d922e10e1ddc444bf7310abfe3ba7330290dacb2dba0b80334a826" protocol=ttrpc version=3 Mar 2 13:31:23.823769 containerd[1604]: time="2026-03-02T13:31:23.823728860Z" level=info msg="CreateContainer within sandbox \"69327f4b765ab2ba893148428162af9cde8d1973ec6a721bf3daf672398485ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a\"" Mar 2 13:31:23.825193 containerd[1604]: time="2026-03-02T13:31:23.824994156Z" level=info msg="StartContainer for \"32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a\"" Mar 2 13:31:23.828030 containerd[1604]: time="2026-03-02T13:31:23.827273556Z" level=info msg="connecting to shim 32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a" address="unix:///run/containerd/s/a01d61cb0ab18f127ae61bf09098c493a81b4234a03bf6824f4601d79833fa81" protocol=ttrpc version=3 Mar 2 13:31:23.860735 systemd[1]: Started cri-containerd-8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a.scope - libcontainer container 8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a. Mar 2 13:31:23.876546 systemd[1]: Started cri-containerd-32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a.scope - libcontainer container 32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a. 
Mar 2 13:31:23.959607 containerd[1604]: time="2026-03-02T13:31:23.959413890Z" level=info msg="StartContainer for \"8ca82f2b38ab184aa58e58317a737c7179b57f71772a181126109a436db0f59a\" returns successfully" Mar 2 13:31:23.967303 containerd[1604]: time="2026-03-02T13:31:23.967246565Z" level=info msg="StartContainer for \"32ccbc7bdd4a6113d7975cb70df443c360c7ae90b591bbaceadeb095766f465a\" returns successfully" Mar 2 13:31:24.447918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242736882.mount: Deactivated successfully. Mar 2 13:31:24.933776 kubelet[2875]: I0302 13:31:24.933623 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jkcwj" podStartSLOduration=32.933600357 podStartE2EDuration="32.933600357s" podCreationTimestamp="2026-03-02 13:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:31:24.929823264 +0000 UTC m=+37.735094574" watchObservedRunningTime="2026-03-02 13:31:24.933600357 +0000 UTC m=+37.738871648" Mar 2 13:31:24.980100 kubelet[2875]: I0302 13:31:24.979759 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zqsb5" podStartSLOduration=32.979730345 podStartE2EDuration="32.979730345s" podCreationTimestamp="2026-03-02 13:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:31:24.954200168 +0000 UTC m=+37.759471478" watchObservedRunningTime="2026-03-02 13:31:24.979730345 +0000 UTC m=+37.785001654" Mar 2 13:31:52.219844 systemd[1]: Started sshd@8-10.230.30.118:22-68.220.241.50:43500.service - OpenSSH per-connection server daemon (68.220.241.50:43500). 
Mar 2 13:31:52.789593 sshd[4203]: Accepted publickey for core from 68.220.241.50 port 43500 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:31:52.792809 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:31:52.810312 systemd-logind[1545]: New session 10 of user core. Mar 2 13:31:52.819695 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:31:53.658032 sshd[4208]: Connection closed by 68.220.241.50 port 43500 Mar 2 13:31:53.659107 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Mar 2 13:31:53.673788 systemd[1]: sshd@8-10.230.30.118:22-68.220.241.50:43500.service: Deactivated successfully. Mar 2 13:31:53.683356 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:31:53.684863 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:31:53.687858 systemd-logind[1545]: Removed session 10. Mar 2 13:31:59.074950 systemd[1]: Started sshd@9-10.230.30.118:22-68.220.241.50:43508.service - OpenSSH per-connection server daemon (68.220.241.50:43508). Mar 2 13:31:59.604016 sshd[4227]: Accepted publickey for core from 68.220.241.50 port 43508 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:31:59.606820 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:31:59.614456 systemd-logind[1545]: New session 11 of user core. Mar 2 13:31:59.624490 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:32:00.003197 sshd[4230]: Connection closed by 68.220.241.50 port 43508 Mar 2 13:32:00.004555 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:00.011698 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:32:00.013173 systemd[1]: sshd@9-10.230.30.118:22-68.220.241.50:43508.service: Deactivated successfully. 
Mar 2 13:32:00.015749 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:32:00.017861 systemd-logind[1545]: Removed session 11. Mar 2 13:32:05.110249 systemd[1]: Started sshd@10-10.230.30.118:22-68.220.241.50:33662.service - OpenSSH per-connection server daemon (68.220.241.50:33662). Mar 2 13:32:05.626334 sshd[4243]: Accepted publickey for core from 68.220.241.50 port 33662 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:05.628585 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:05.638250 systemd-logind[1545]: New session 12 of user core. Mar 2 13:32:05.648422 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:32:06.013910 sshd[4246]: Connection closed by 68.220.241.50 port 33662 Mar 2 13:32:06.015033 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:06.023919 systemd[1]: sshd@10-10.230.30.118:22-68.220.241.50:33662.service: Deactivated successfully. Mar 2 13:32:06.026424 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:32:06.030984 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:32:06.037008 systemd-logind[1545]: Removed session 12. Mar 2 13:32:11.132463 systemd[1]: Started sshd@11-10.230.30.118:22-68.220.241.50:33678.service - OpenSSH per-connection server daemon (68.220.241.50:33678). Mar 2 13:32:11.654846 sshd[4259]: Accepted publickey for core from 68.220.241.50 port 33678 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:11.656769 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:11.666350 systemd-logind[1545]: New session 13 of user core. Mar 2 13:32:11.674434 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 2 13:32:12.048306 sshd[4262]: Connection closed by 68.220.241.50 port 33678 Mar 2 13:32:12.049470 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:12.057366 systemd[1]: sshd@11-10.230.30.118:22-68.220.241.50:33678.service: Deactivated successfully. Mar 2 13:32:12.060978 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:32:12.063266 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:32:12.065892 systemd-logind[1545]: Removed session 13. Mar 2 13:32:12.168319 systemd[1]: Started sshd@12-10.230.30.118:22-68.220.241.50:33692.service - OpenSSH per-connection server daemon (68.220.241.50:33692). Mar 2 13:32:12.693907 sshd[4275]: Accepted publickey for core from 68.220.241.50 port 33692 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:12.696105 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:12.703746 systemd-logind[1545]: New session 14 of user core. Mar 2 13:32:12.716585 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:32:13.153382 sshd[4278]: Connection closed by 68.220.241.50 port 33692 Mar 2 13:32:13.154468 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:13.160764 systemd[1]: sshd@12-10.230.30.118:22-68.220.241.50:33692.service: Deactivated successfully. Mar 2 13:32:13.161648 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:32:13.164750 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:32:13.168218 systemd-logind[1545]: Removed session 14. Mar 2 13:32:13.250539 systemd[1]: Started sshd@13-10.230.30.118:22-68.220.241.50:41092.service - OpenSSH per-connection server daemon (68.220.241.50:41092). 
Mar 2 13:32:13.756891 sshd[4289]: Accepted publickey for core from 68.220.241.50 port 41092 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:13.758966 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:13.767244 systemd-logind[1545]: New session 15 of user core. Mar 2 13:32:13.773458 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:32:14.168370 sshd[4292]: Connection closed by 68.220.241.50 port 41092 Mar 2 13:32:14.168114 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:14.176873 systemd[1]: sshd@13-10.230.30.118:22-68.220.241.50:41092.service: Deactivated successfully. Mar 2 13:32:14.180573 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:32:14.183097 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Mar 2 13:32:14.185075 systemd-logind[1545]: Removed session 15. Mar 2 13:32:19.273199 systemd[1]: Started sshd@14-10.230.30.118:22-68.220.241.50:41104.service - OpenSSH per-connection server daemon (68.220.241.50:41104). Mar 2 13:32:19.803885 sshd[4304]: Accepted publickey for core from 68.220.241.50 port 41104 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:19.806674 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:19.814126 systemd-logind[1545]: New session 16 of user core. Mar 2 13:32:19.822492 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 13:32:20.198333 sshd[4307]: Connection closed by 68.220.241.50 port 41104 Mar 2 13:32:20.197994 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:20.206842 systemd[1]: sshd@14-10.230.30.118:22-68.220.241.50:41104.service: Deactivated successfully. Mar 2 13:32:20.210117 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:32:20.214453 systemd-logind[1545]: Session 16 logged out. 
Waiting for processes to exit. Mar 2 13:32:20.216418 systemd-logind[1545]: Removed session 16. Mar 2 13:32:25.305124 systemd[1]: Started sshd@15-10.230.30.118:22-68.220.241.50:57976.service - OpenSSH per-connection server daemon (68.220.241.50:57976). Mar 2 13:32:25.824225 sshd[4321]: Accepted publickey for core from 68.220.241.50 port 57976 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:25.825244 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:25.833148 systemd-logind[1545]: New session 17 of user core. Mar 2 13:32:25.842795 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 13:32:26.217267 sshd[4324]: Connection closed by 68.220.241.50 port 57976 Mar 2 13:32:26.216289 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:26.223398 systemd[1]: sshd@15-10.230.30.118:22-68.220.241.50:57976.service: Deactivated successfully. Mar 2 13:32:26.225932 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 13:32:26.227388 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:32:26.229951 systemd-logind[1545]: Removed session 17. Mar 2 13:32:26.323775 systemd[1]: Started sshd@16-10.230.30.118:22-68.220.241.50:57992.service - OpenSSH per-connection server daemon (68.220.241.50:57992). Mar 2 13:32:26.841243 sshd[4337]: Accepted publickey for core from 68.220.241.50 port 57992 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:26.843451 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:26.852178 systemd-logind[1545]: New session 18 of user core. Mar 2 13:32:26.857421 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 2 13:32:27.682212 sshd[4340]: Connection closed by 68.220.241.50 port 57992 Mar 2 13:32:27.683012 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:27.690922 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Mar 2 13:32:27.691386 systemd[1]: sshd@16-10.230.30.118:22-68.220.241.50:57992.service: Deactivated successfully. Mar 2 13:32:27.694809 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:32:27.698353 systemd-logind[1545]: Removed session 18. Mar 2 13:32:27.783877 systemd[1]: Started sshd@17-10.230.30.118:22-68.220.241.50:58002.service - OpenSSH per-connection server daemon (68.220.241.50:58002). Mar 2 13:32:28.306855 sshd[4350]: Accepted publickey for core from 68.220.241.50 port 58002 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:28.308836 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:28.316890 systemd-logind[1545]: New session 19 of user core. Mar 2 13:32:28.324450 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:32:29.451185 sshd[4353]: Connection closed by 68.220.241.50 port 58002 Mar 2 13:32:29.452272 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:29.458986 systemd[1]: sshd@17-10.230.30.118:22-68.220.241.50:58002.service: Deactivated successfully. Mar 2 13:32:29.462103 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:32:29.464382 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:32:29.466768 systemd-logind[1545]: Removed session 19. Mar 2 13:32:29.562984 systemd[1]: Started sshd@18-10.230.30.118:22-68.220.241.50:58010.service - OpenSSH per-connection server daemon (68.220.241.50:58010). 
Mar 2 13:32:30.088036 sshd[4370]: Accepted publickey for core from 68.220.241.50 port 58010 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:30.090741 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:30.098954 systemd-logind[1545]: New session 20 of user core. Mar 2 13:32:30.108908 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:32:30.665282 sshd[4373]: Connection closed by 68.220.241.50 port 58010 Mar 2 13:32:30.665787 sshd-session[4370]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:30.672645 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Mar 2 13:32:30.672883 systemd[1]: sshd@18-10.230.30.118:22-68.220.241.50:58010.service: Deactivated successfully. Mar 2 13:32:30.675974 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:32:30.679816 systemd-logind[1545]: Removed session 20. Mar 2 13:32:30.772122 systemd[1]: Started sshd@19-10.230.30.118:22-68.220.241.50:58026.service - OpenSSH per-connection server daemon (68.220.241.50:58026). Mar 2 13:32:31.336330 sshd[4383]: Accepted publickey for core from 68.220.241.50 port 58026 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:31.338474 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:31.345629 systemd-logind[1545]: New session 21 of user core. Mar 2 13:32:31.355441 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:32:31.733567 sshd[4386]: Connection closed by 68.220.241.50 port 58026 Mar 2 13:32:31.734523 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:31.742318 systemd[1]: sshd@19-10.230.30.118:22-68.220.241.50:58026.service: Deactivated successfully. Mar 2 13:32:31.747771 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:32:31.749797 systemd-logind[1545]: Session 21 logged out. 
Waiting for processes to exit. Mar 2 13:32:31.752884 systemd-logind[1545]: Removed session 21. Mar 2 13:32:36.843579 systemd[1]: Started sshd@20-10.230.30.118:22-68.220.241.50:45284.service - OpenSSH per-connection server daemon (68.220.241.50:45284). Mar 2 13:32:37.424324 sshd[4399]: Accepted publickey for core from 68.220.241.50 port 45284 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:37.426587 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:37.436847 systemd-logind[1545]: New session 22 of user core. Mar 2 13:32:37.445520 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 13:32:37.808783 sshd[4402]: Connection closed by 68.220.241.50 port 45284 Mar 2 13:32:37.809984 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:37.816980 systemd[1]: sshd@20-10.230.30.118:22-68.220.241.50:45284.service: Deactivated successfully. Mar 2 13:32:37.820154 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:32:37.821608 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:32:37.824136 systemd-logind[1545]: Removed session 22. Mar 2 13:32:42.916755 systemd[1]: Started sshd@21-10.230.30.118:22-68.220.241.50:60474.service - OpenSSH per-connection server daemon (68.220.241.50:60474). Mar 2 13:32:43.437976 sshd[4414]: Accepted publickey for core from 68.220.241.50 port 60474 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:43.440082 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:43.447915 systemd-logind[1545]: New session 23 of user core. Mar 2 13:32:43.459710 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 2 13:32:43.817371 sshd[4417]: Connection closed by 68.220.241.50 port 60474 Mar 2 13:32:43.818506 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:43.824057 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Mar 2 13:32:43.824352 systemd[1]: sshd@21-10.230.30.118:22-68.220.241.50:60474.service: Deactivated successfully. Mar 2 13:32:43.827466 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 13:32:43.831532 systemd-logind[1545]: Removed session 23. Mar 2 13:32:43.928784 systemd[1]: Started sshd@22-10.230.30.118:22-68.220.241.50:60480.service - OpenSSH per-connection server daemon (68.220.241.50:60480). Mar 2 13:32:44.456339 sshd[4429]: Accepted publickey for core from 68.220.241.50 port 60480 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:44.458566 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:44.466996 systemd-logind[1545]: New session 24 of user core. Mar 2 13:32:44.477423 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 13:32:45.091479 systemd[1]: Started sshd@23-10.230.30.118:22-103.148.100.146:52600.service - OpenSSH per-connection server daemon (103.148.100.146:52600). Mar 2 13:32:46.289821 containerd[1604]: time="2026-03-02T13:32:46.289727107Z" level=info msg="StopContainer for \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" with timeout 30 (s)" Mar 2 13:32:46.302364 containerd[1604]: time="2026-03-02T13:32:46.302296788Z" level=info msg="Stop container \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" with signal terminated" Mar 2 13:32:46.321604 systemd[1]: cri-containerd-bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df.scope: Deactivated successfully. 
Mar 2 13:32:46.327654 containerd[1604]: time="2026-03-02T13:32:46.327583935Z" level=info msg="received container exit event container_id:\"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" id:\"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" pid:3628 exited_at:{seconds:1772458366 nanos:323956008}" Mar 2 13:32:46.364050 sshd[4441]: Received disconnect from 103.148.100.146 port 52600:11: Bye Bye [preauth] Mar 2 13:32:46.364050 sshd[4441]: Disconnected from authenticating user root 103.148.100.146 port 52600 [preauth] Mar 2 13:32:46.372792 containerd[1604]: time="2026-03-02T13:32:46.372731343Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:32:46.373087 systemd[1]: sshd@23-10.230.30.118:22-103.148.100.146:52600.service: Deactivated successfully. Mar 2 13:32:46.382436 containerd[1604]: time="2026-03-02T13:32:46.382370227Z" level=info msg="StopContainer for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" with timeout 2 (s)" Mar 2 13:32:46.384130 containerd[1604]: time="2026-03-02T13:32:46.383931997Z" level=info msg="Stop container \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" with signal terminated" Mar 2 13:32:46.404868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df-rootfs.mount: Deactivated successfully. 
Mar 2 13:32:46.411690 containerd[1604]: time="2026-03-02T13:32:46.411507246Z" level=info msg="StopContainer for \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" returns successfully" Mar 2 13:32:46.413214 containerd[1604]: time="2026-03-02T13:32:46.413072062Z" level=info msg="StopPodSandbox for \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\"" Mar 2 13:32:46.423911 containerd[1604]: time="2026-03-02T13:32:46.423797681Z" level=info msg="Container to stop \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:32:46.428873 systemd-networkd[1503]: lxc_health: Link DOWN Mar 2 13:32:46.428886 systemd-networkd[1503]: lxc_health: Lost carrier Mar 2 13:32:46.454112 systemd[1]: cri-containerd-67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43.scope: Deactivated successfully. Mar 2 13:32:46.456064 systemd[1]: cri-containerd-67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43.scope: Consumed 10.696s CPU time, 197.4M memory peak, 73.2M read from disk, 13.3M written to disk. Mar 2 13:32:46.459599 systemd[1]: cri-containerd-1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a.scope: Deactivated successfully. 
Mar 2 13:32:46.464987 containerd[1604]: time="2026-03-02T13:32:46.464789175Z" level=info msg="received container exit event container_id:\"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" id:\"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" pid:3480 exited_at:{seconds:1772458366 nanos:463673429}" Mar 2 13:32:46.467971 containerd[1604]: time="2026-03-02T13:32:46.467891126Z" level=info msg="received sandbox exit event container_id:\"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" id:\"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" exit_status:137 exited_at:{seconds:1772458366 nanos:467358842}" monitor_name=podsandbox Mar 2 13:32:46.505835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43-rootfs.mount: Deactivated successfully. Mar 2 13:32:46.515847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a-rootfs.mount: Deactivated successfully. 
Mar 2 13:32:46.519219 containerd[1604]: time="2026-03-02T13:32:46.519131877Z" level=info msg="shim disconnected" id=1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a namespace=k8s.io Mar 2 13:32:46.519422 containerd[1604]: time="2026-03-02T13:32:46.519387731Z" level=warning msg="cleaning up after shim disconnected" id=1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a namespace=k8s.io Mar 2 13:32:46.527854 containerd[1604]: time="2026-03-02T13:32:46.519504484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:32:46.535016 containerd[1604]: time="2026-03-02T13:32:46.534490993Z" level=info msg="StopContainer for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" returns successfully" Mar 2 13:32:46.535411 containerd[1604]: time="2026-03-02T13:32:46.535362139Z" level=info msg="StopPodSandbox for \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\"" Mar 2 13:32:46.536490 containerd[1604]: time="2026-03-02T13:32:46.535750045Z" level=info msg="Container to stop \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:32:46.536490 containerd[1604]: time="2026-03-02T13:32:46.535783608Z" level=info msg="Container to stop \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:32:46.536490 containerd[1604]: time="2026-03-02T13:32:46.535799973Z" level=info msg="Container to stop \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:32:46.536490 containerd[1604]: time="2026-03-02T13:32:46.535922647Z" level=info msg="Container to stop \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:32:46.536490 containerd[1604]: 
time="2026-03-02T13:32:46.535943765Z" level=info msg="Container to stop \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:32:46.553542 systemd[1]: cri-containerd-e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6.scope: Deactivated successfully. Mar 2 13:32:46.565075 containerd[1604]: time="2026-03-02T13:32:46.564972468Z" level=info msg="received sandbox exit event container_id:\"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" id:\"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" exit_status:137 exited_at:{seconds:1772458366 nanos:561582149}" monitor_name=podsandbox Mar 2 13:32:46.587944 containerd[1604]: time="2026-03-02T13:32:46.587477836Z" level=info msg="received sandbox container exit event sandbox_id:\"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" exit_status:137 exited_at:{seconds:1772458366 nanos:467358842}" monitor_name=criService Mar 2 13:32:46.590493 containerd[1604]: time="2026-03-02T13:32:46.590436630Z" level=info msg="TearDown network for sandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" successfully" Mar 2 13:32:46.590493 containerd[1604]: time="2026-03-02T13:32:46.590491861Z" level=info msg="StopPodSandbox for \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" returns successfully" Mar 2 13:32:46.593705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a-shm.mount: Deactivated successfully. Mar 2 13:32:46.632595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6-rootfs.mount: Deactivated successfully. 
Mar 2 13:32:46.641291 containerd[1604]: time="2026-03-02T13:32:46.640707206Z" level=info msg="shim disconnected" id=e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6 namespace=k8s.io Mar 2 13:32:46.641291 containerd[1604]: time="2026-03-02T13:32:46.640753802Z" level=warning msg="cleaning up after shim disconnected" id=e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6 namespace=k8s.io Mar 2 13:32:46.641291 containerd[1604]: time="2026-03-02T13:32:46.640768957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:32:46.667782 containerd[1604]: time="2026-03-02T13:32:46.667566468Z" level=info msg="received sandbox container exit event sandbox_id:\"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" exit_status:137 exited_at:{seconds:1772458366 nanos:561582149}" monitor_name=criService Mar 2 13:32:46.668178 containerd[1604]: time="2026-03-02T13:32:46.668120552Z" level=info msg="TearDown network for sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" successfully" Mar 2 13:32:46.668252 containerd[1604]: time="2026-03-02T13:32:46.668187173Z" level=info msg="StopPodSandbox for \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" returns successfully" Mar 2 13:32:46.734521 kubelet[2875]: I0302 13:32:46.734434 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-cilium-config-path\") pod \"956d1a8f-c8bd-4ef2-abb7-6e2e674444af\" (UID: \"956d1a8f-c8bd-4ef2-abb7-6e2e674444af\") " Mar 2 13:32:46.734521 kubelet[2875]: I0302 13:32:46.734533 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpw5j\" (UniqueName: \"kubernetes.io/projected/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-kube-api-access-kpw5j\") pod \"956d1a8f-c8bd-4ef2-abb7-6e2e674444af\" (UID: \"956d1a8f-c8bd-4ef2-abb7-6e2e674444af\") " Mar 2 
13:32:46.741243 kubelet[2875]: I0302 13:32:46.740673 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "956d1a8f-c8bd-4ef2-abb7-6e2e674444af" (UID: "956d1a8f-c8bd-4ef2-abb7-6e2e674444af"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:32:46.745083 kubelet[2875]: I0302 13:32:46.745043 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-kube-api-access-kpw5j" (OuterVolumeSpecName: "kube-api-access-kpw5j") pod "956d1a8f-c8bd-4ef2-abb7-6e2e674444af" (UID: "956d1a8f-c8bd-4ef2-abb7-6e2e674444af"). InnerVolumeSpecName "kube-api-access-kpw5j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:32:46.835897 kubelet[2875]: I0302 13:32:46.835697 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-net\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.835897 kubelet[2875]: I0302 13:32:46.835764 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlq99\" (UniqueName: \"kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-kube-api-access-hlq99\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.835897 kubelet[2875]: I0302 13:32:46.835791 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-xtables-lock\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.835897 
kubelet[2875]: I0302 13:32:46.835830 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-config-path\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.835897 kubelet[2875]: I0302 13:32:46.835900 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-run\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836361 kubelet[2875]: I0302 13:32:46.835928 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-kernel\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836361 kubelet[2875]: I0302 13:32:46.835965 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cni-path\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836361 kubelet[2875]: I0302 13:32:46.835988 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-bpf-maps\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836361 kubelet[2875]: I0302 13:32:46.836019 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/534e8999-18ef-49e2-8e65-87338c77e12c-clustermesh-secrets\") pod 
\"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836361 kubelet[2875]: I0302 13:32:46.836045 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-hostproc\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836361 kubelet[2875]: I0302 13:32:46.836068 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-lib-modules\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836663 kubelet[2875]: I0302 13:32:46.836093 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-etc-cni-netd\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836663 kubelet[2875]: I0302 13:32:46.836119 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-hubble-tls\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836663 kubelet[2875]: I0302 13:32:46.836145 2875 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-cgroup\") pod \"534e8999-18ef-49e2-8e65-87338c77e12c\" (UID: \"534e8999-18ef-49e2-8e65-87338c77e12c\") " Mar 2 13:32:46.836663 kubelet[2875]: I0302 13:32:46.836240 2875 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kpw5j\" (UniqueName: 
\"kubernetes.io/projected/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-kube-api-access-kpw5j\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.836663 kubelet[2875]: I0302 13:32:46.836263 2875 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/956d1a8f-c8bd-4ef2-abb7-6e2e674444af-cilium-config-path\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.836663 kubelet[2875]: I0302 13:32:46.836343 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.836920 kubelet[2875]: I0302 13:32:46.836416 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.840191 kubelet[2875]: I0302 13:32:46.837872 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.840191 kubelet[2875]: I0302 13:32:46.839264 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.840581 kubelet[2875]: I0302 13:32:46.840524 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-hostproc" (OuterVolumeSpecName: "hostproc") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.840653 kubelet[2875]: I0302 13:32:46.840595 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.840653 kubelet[2875]: I0302 13:32:46.840624 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.840875 kubelet[2875]: I0302 13:32:46.840830 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.841047 kubelet[2875]: I0302 13:32:46.841007 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.842213 kubelet[2875]: I0302 13:32:46.841194 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cni-path" (OuterVolumeSpecName: "cni-path") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:32:46.843334 kubelet[2875]: I0302 13:32:46.843299 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-kube-api-access-hlq99" (OuterVolumeSpecName: "kube-api-access-hlq99") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "kube-api-access-hlq99". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:32:46.849417 kubelet[2875]: I0302 13:32:46.849370 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:32:46.850654 kubelet[2875]: I0302 13:32:46.850623 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534e8999-18ef-49e2-8e65-87338c77e12c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:32:46.852585 kubelet[2875]: I0302 13:32:46.852029 2875 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "534e8999-18ef-49e2-8e65-87338c77e12c" (UID: "534e8999-18ef-49e2-8e65-87338c77e12c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:32:46.937299 kubelet[2875]: I0302 13:32:46.937228 2875 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cni-path\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937500 2875 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-bpf-maps\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937522 2875 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/534e8999-18ef-49e2-8e65-87338c77e12c-clustermesh-secrets\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937544 2875 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-hostproc\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937579 2875 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-lib-modules\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937600 2875 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-etc-cni-netd\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937614 2875 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-hubble-tls\") on node 
\"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937629 2875 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-cgroup\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.937923 kubelet[2875]: I0302 13:32:46.937648 2875 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-net\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.938393 kubelet[2875]: I0302 13:32:46.937664 2875 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlq99\" (UniqueName: \"kubernetes.io/projected/534e8999-18ef-49e2-8e65-87338c77e12c-kube-api-access-hlq99\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.938393 kubelet[2875]: I0302 13:32:46.937681 2875 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-xtables-lock\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.938393 kubelet[2875]: I0302 13:32:46.937696 2875 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-config-path\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.938393 kubelet[2875]: I0302 13:32:46.937715 2875 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-cilium-run\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:46.938393 kubelet[2875]: I0302 13:32:46.937731 2875 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/534e8999-18ef-49e2-8e65-87338c77e12c-host-proc-sys-kernel\") on node \"srv-u4d8l.gb1.brightbox.com\" DevicePath \"\"" Mar 2 13:32:47.173806 kubelet[2875]: I0302 13:32:47.173532 2875 scope.go:117] "RemoveContainer" containerID="67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43" Mar 2 13:32:47.184181 containerd[1604]: time="2026-03-02T13:32:47.183405218Z" level=info msg="RemoveContainer for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\"" Mar 2 13:32:47.196701 systemd[1]: Removed slice kubepods-besteffort-pod956d1a8f_c8bd_4ef2_abb7_6e2e674444af.slice - libcontainer container kubepods-besteffort-pod956d1a8f_c8bd_4ef2_abb7_6e2e674444af.slice. Mar 2 13:32:47.200844 systemd[1]: Removed slice kubepods-burstable-pod534e8999_18ef_49e2_8e65_87338c77e12c.slice - libcontainer container kubepods-burstable-pod534e8999_18ef_49e2_8e65_87338c77e12c.slice. Mar 2 13:32:47.200992 systemd[1]: kubepods-burstable-pod534e8999_18ef_49e2_8e65_87338c77e12c.slice: Consumed 10.862s CPU time, 197.8M memory peak, 73.3M read from disk, 13.3M written to disk. 
Mar 2 13:32:47.208715 containerd[1604]: time="2026-03-02T13:32:47.208412414Z" level=info msg="RemoveContainer for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" returns successfully" Mar 2 13:32:47.209480 kubelet[2875]: I0302 13:32:47.209385 2875 scope.go:117] "RemoveContainer" containerID="e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2" Mar 2 13:32:47.211686 containerd[1604]: time="2026-03-02T13:32:47.211576400Z" level=info msg="RemoveContainer for \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\"" Mar 2 13:32:47.232544 containerd[1604]: time="2026-03-02T13:32:47.232440338Z" level=info msg="RemoveContainer for \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\" returns successfully" Mar 2 13:32:47.233369 kubelet[2875]: I0302 13:32:47.232841 2875 scope.go:117] "RemoveContainer" containerID="c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af" Mar 2 13:32:47.235982 containerd[1604]: time="2026-03-02T13:32:47.235946635Z" level=info msg="RemoveContainer for \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\"" Mar 2 13:32:47.242497 containerd[1604]: time="2026-03-02T13:32:47.242416049Z" level=info msg="RemoveContainer for \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\" returns successfully" Mar 2 13:32:47.242935 kubelet[2875]: I0302 13:32:47.242834 2875 scope.go:117] "RemoveContainer" containerID="e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf" Mar 2 13:32:47.271643 containerd[1604]: time="2026-03-02T13:32:47.271558162Z" level=info msg="RemoveContainer for \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\"" Mar 2 13:32:47.284023 containerd[1604]: time="2026-03-02T13:32:47.283957703Z" level=info msg="RemoveContainer for \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\" returns successfully" Mar 2 13:32:47.284730 kubelet[2875]: I0302 13:32:47.284685 2875 scope.go:117] "RemoveContainer" 
containerID="25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281" Mar 2 13:32:47.287853 containerd[1604]: time="2026-03-02T13:32:47.287419469Z" level=info msg="RemoveContainer for \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\"" Mar 2 13:32:47.302977 containerd[1604]: time="2026-03-02T13:32:47.302906128Z" level=info msg="RemoveContainer for \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\" returns successfully" Mar 2 13:32:47.304612 kubelet[2875]: I0302 13:32:47.304322 2875 scope.go:117] "RemoveContainer" containerID="67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43" Mar 2 13:32:47.305496 containerd[1604]: time="2026-03-02T13:32:47.305221865Z" level=error msg="ContainerStatus for \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\": not found" Mar 2 13:32:47.308855 kubelet[2875]: E0302 13:32:47.308219 2875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\": not found" containerID="67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43" Mar 2 13:32:47.308855 kubelet[2875]: I0302 13:32:47.308292 2875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43"} err="failed to get container status \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\": rpc error: code = NotFound desc = an error occurred when try to find container \"67fcea126105f167289b8eba8863e597d9c5218ca3ecd2e6af0b56ea3817fb43\": not found" Mar 2 13:32:47.308855 kubelet[2875]: I0302 13:32:47.308371 2875 scope.go:117] "RemoveContainer" 
containerID="e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2" Mar 2 13:32:47.309096 containerd[1604]: time="2026-03-02T13:32:47.308662589Z" level=error msg="ContainerStatus for \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\": not found" Mar 2 13:32:47.315872 kubelet[2875]: E0302 13:32:47.314678 2875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\": not found" containerID="e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2" Mar 2 13:32:47.315872 kubelet[2875]: I0302 13:32:47.314730 2875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2"} err="failed to get container status \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9f47ff565e4877382e4a8427b88ca00659b507cecde090edf0729b1ecdc97a2\": not found" Mar 2 13:32:47.315872 kubelet[2875]: I0302 13:32:47.314780 2875 scope.go:117] "RemoveContainer" containerID="c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af" Mar 2 13:32:47.315872 kubelet[2875]: E0302 13:32:47.315460 2875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\": not found" containerID="c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af" Mar 2 13:32:47.315872 kubelet[2875]: I0302 13:32:47.315492 2875 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af"} err="failed to get container status \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\": rpc error: code = NotFound desc = an error occurred when try to find container \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\": not found" Mar 2 13:32:47.315872 kubelet[2875]: I0302 13:32:47.315515 2875 scope.go:117] "RemoveContainer" containerID="e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf" Mar 2 13:32:47.316326 containerd[1604]: time="2026-03-02T13:32:47.315215406Z" level=error msg="ContainerStatus for \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c24daa09c3b04d5c9577c91a6c0d7dc9a3f9cef381c903f42930cc733defb0af\": not found" Mar 2 13:32:47.316326 containerd[1604]: time="2026-03-02T13:32:47.315967426Z" level=error msg="ContainerStatus for \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\": not found" Mar 2 13:32:47.316435 kubelet[2875]: E0302 13:32:47.316087 2875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\": not found" containerID="e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf" Mar 2 13:32:47.316435 kubelet[2875]: I0302 13:32:47.316114 2875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf"} err="failed to get container status \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"e549020d643f69e8ebadfb72f9bf82263f653b967e61c9a178d243de2f0ebfbf\": not found" Mar 2 13:32:47.316435 kubelet[2875]: I0302 13:32:47.316134 2875 scope.go:117] "RemoveContainer" containerID="25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281" Mar 2 13:32:47.316620 kubelet[2875]: E0302 13:32:47.316575 2875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\": not found" containerID="25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281" Mar 2 13:32:47.316675 containerd[1604]: time="2026-03-02T13:32:47.316384743Z" level=error msg="ContainerStatus for \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\": not found" Mar 2 13:32:47.316720 kubelet[2875]: I0302 13:32:47.316656 2875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281"} err="failed to get container status \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\": rpc error: code = NotFound desc = an error occurred when try to find container \"25816bd500f57d6502789ee1bbdce1c7c67d363a4cf725fe55936b3b8a396281\": not found" Mar 2 13:32:47.316720 kubelet[2875]: I0302 13:32:47.316713 2875 scope.go:117] "RemoveContainer" containerID="bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df" Mar 2 13:32:47.319410 containerd[1604]: time="2026-03-02T13:32:47.319350944Z" level=info msg="RemoveContainer for \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\"" Mar 2 13:32:47.326217 containerd[1604]: time="2026-03-02T13:32:47.326141821Z" level=info msg="RemoveContainer for 
\"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" returns successfully" Mar 2 13:32:47.327058 containerd[1604]: time="2026-03-02T13:32:47.326952384Z" level=error msg="ContainerStatus for \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\": not found" Mar 2 13:32:47.327188 kubelet[2875]: I0302 13:32:47.326599 2875 scope.go:117] "RemoveContainer" containerID="bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df" Mar 2 13:32:47.327188 kubelet[2875]: E0302 13:32:47.327124 2875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\": not found" containerID="bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df" Mar 2 13:32:47.327338 kubelet[2875]: I0302 13:32:47.327207 2875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df"} err="failed to get container status \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcb04bab9168b98e1f6197654f3b43641d49bba1f4bfd5f1cf60875df713a6df\": not found" Mar 2 13:32:47.400420 systemd[1]: var-lib-kubelet-pods-956d1a8f\x2dc8bd\x2d4ef2\x2dabb7\x2d6e2e674444af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkpw5j.mount: Deactivated successfully. Mar 2 13:32:47.400594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6-shm.mount: Deactivated successfully. 
Mar 2 13:32:47.400723 systemd[1]: var-lib-kubelet-pods-534e8999\x2d18ef\x2d49e2\x2d8e65\x2d87338c77e12c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlq99.mount: Deactivated successfully. Mar 2 13:32:47.400866 systemd[1]: var-lib-kubelet-pods-534e8999\x2d18ef\x2d49e2\x2d8e65\x2d87338c77e12c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 2 13:32:47.400975 systemd[1]: var-lib-kubelet-pods-534e8999\x2d18ef\x2d49e2\x2d8e65\x2d87338c77e12c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 2 13:32:47.408572 containerd[1604]: time="2026-03-02T13:32:47.408441112Z" level=info msg="StopPodSandbox for \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\"" Mar 2 13:32:47.408962 containerd[1604]: time="2026-03-02T13:32:47.408888691Z" level=info msg="TearDown network for sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" successfully" Mar 2 13:32:47.408962 containerd[1604]: time="2026-03-02T13:32:47.408917329Z" level=info msg="StopPodSandbox for \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" returns successfully" Mar 2 13:32:47.409912 containerd[1604]: time="2026-03-02T13:32:47.409879550Z" level=info msg="RemovePodSandbox for \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\"" Mar 2 13:32:47.410001 containerd[1604]: time="2026-03-02T13:32:47.409932650Z" level=info msg="Forcibly stopping sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\"" Mar 2 13:32:47.410055 containerd[1604]: time="2026-03-02T13:32:47.410039487Z" level=info msg="TearDown network for sandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" successfully" Mar 2 13:32:47.411539 containerd[1604]: time="2026-03-02T13:32:47.411509294Z" level=info msg="Ensure that sandbox e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6 in task-service has been cleanup successfully" Mar 2 
13:32:47.414540 containerd[1604]: time="2026-03-02T13:32:47.414508846Z" level=info msg="RemovePodSandbox \"e8758dc1368a3c58696a29bba49175dd0a6dcfb1d6cd8de1729406455e0110f6\" returns successfully" Mar 2 13:32:47.415183 containerd[1604]: time="2026-03-02T13:32:47.415024555Z" level=info msg="StopPodSandbox for \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\"" Mar 2 13:32:47.415183 containerd[1604]: time="2026-03-02T13:32:47.415132792Z" level=info msg="TearDown network for sandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" successfully" Mar 2 13:32:47.415393 containerd[1604]: time="2026-03-02T13:32:47.415154507Z" level=info msg="StopPodSandbox for \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" returns successfully" Mar 2 13:32:47.415974 containerd[1604]: time="2026-03-02T13:32:47.415929609Z" level=info msg="RemovePodSandbox for \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\"" Mar 2 13:32:47.416143 containerd[1604]: time="2026-03-02T13:32:47.416117935Z" level=info msg="Forcibly stopping sandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\"" Mar 2 13:32:47.416401 containerd[1604]: time="2026-03-02T13:32:47.416374795Z" level=info msg="TearDown network for sandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" successfully" Mar 2 13:32:47.417978 containerd[1604]: time="2026-03-02T13:32:47.417906793Z" level=info msg="Ensure that sandbox 1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a in task-service has been cleanup successfully" Mar 2 13:32:47.422858 containerd[1604]: time="2026-03-02T13:32:47.422726244Z" level=info msg="RemovePodSandbox \"1a984753dcd89ffd58b2d97338f955e74a006f684487e46ae4a72a693cc70e2a\" returns successfully" Mar 2 13:32:47.431741 kubelet[2875]: I0302 13:32:47.431583 2875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534e8999-18ef-49e2-8e65-87338c77e12c" 
path="/var/lib/kubelet/pods/534e8999-18ef-49e2-8e65-87338c77e12c/volumes" Mar 2 13:32:47.433773 kubelet[2875]: I0302 13:32:47.433595 2875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956d1a8f-c8bd-4ef2-abb7-6e2e674444af" path="/var/lib/kubelet/pods/956d1a8f-c8bd-4ef2-abb7-6e2e674444af/volumes" Mar 2 13:32:47.655225 kubelet[2875]: E0302 13:32:47.654967 2875 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:32:48.290479 sshd[4432]: Connection closed by 68.220.241.50 port 60480 Mar 2 13:32:48.292776 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Mar 2 13:32:48.299754 systemd[1]: sshd@22-10.230.30.118:22-68.220.241.50:60480.service: Deactivated successfully. Mar 2 13:32:48.302677 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 13:32:48.304434 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. Mar 2 13:32:48.306624 systemd-logind[1545]: Removed session 24. Mar 2 13:32:48.390986 systemd[1]: Started sshd@24-10.230.30.118:22-68.220.241.50:60494.service - OpenSSH per-connection server daemon (68.220.241.50:60494). Mar 2 13:32:48.921132 sshd[4585]: Accepted publickey for core from 68.220.241.50 port 60494 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE Mar 2 13:32:48.923276 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:32:48.931017 systemd-logind[1545]: New session 25 of user core. Mar 2 13:32:48.944641 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 2 13:32:49.929224 kubelet[2875]: E0302 13:32:49.928209 2875 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-u4d8l.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-u4d8l.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret"
Mar 2 13:32:49.929224 kubelet[2875]: E0302 13:32:49.928235 2875 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:srv-u4d8l.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-u4d8l.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret"
Mar 2 13:32:49.929224 kubelet[2875]: E0302 13:32:49.928366 2875 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-u4d8l.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-u4d8l.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap"
Mar 2 13:32:49.929224 kubelet[2875]: I0302 13:32:49.928365 2875 status_manager.go:895] "Failed to get status for pod" podUID="d97aa536-a8f8-4a75-bcd0-73f0f2b8f228" pod="kube-system/cilium-qvv7p" err="pods \"cilium-qvv7p\" is forbidden: User \"system:node:srv-u4d8l.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-u4d8l.gb1.brightbox.com' and this object"
Mar 2 13:32:49.929944 kubelet[2875]: E0302 13:32:49.928462 2875 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-u4d8l.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-u4d8l.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret"
Mar 2 13:32:49.934196 systemd[1]: Created slice kubepods-burstable-podd97aa536_a8f8_4a75_bcd0_73f0f2b8f228.slice - libcontainer container kubepods-burstable-podd97aa536_a8f8_4a75_bcd0_73f0f2b8f228.slice.
Mar 2 13:32:49.972793 sshd[4588]: Connection closed by 68.220.241.50 port 60494
Mar 2 13:32:49.973836 sshd-session[4585]: pam_unix(sshd:session): session closed for user core
Mar 2 13:32:49.983901 systemd[1]: sshd@24-10.230.30.118:22-68.220.241.50:60494.service: Deactivated successfully.
Mar 2 13:32:49.990146 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:32:49.993447 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:32:49.998290 systemd-logind[1545]: Removed session 25.
Mar 2 13:32:50.059851 kubelet[2875]: I0302 13:32:50.059783 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-xtables-lock\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.060266 kubelet[2875]: I0302 13:32:50.060225 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-clustermesh-secrets\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.060437 kubelet[2875]: I0302 13:32:50.060400 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cilium-ipsec-secrets\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.060731 kubelet[2875]: I0302 13:32:50.060652 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cilium-run\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.060731 kubelet[2875]: I0302 13:32:50.060698 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-lib-modules\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.060923 kubelet[2875]: I0302 13:32:50.060886 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-hostproc\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.061074 kubelet[2875]: I0302 13:32:50.061051 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cilium-cgroup\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.061074 kubelet[2875]: I0302 13:32:50.061118 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-host-proc-sys-kernel\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.061426 kubelet[2875]: I0302 13:32:50.061345 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cni-path\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.061589 kubelet[2875]: I0302 13:32:50.061546 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cilium-config-path\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.061729 kubelet[2875]: I0302 13:32:50.061695 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4z8\" (UniqueName: \"kubernetes.io/projected/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-kube-api-access-5q4z8\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.061869 kubelet[2875]: I0302 13:32:50.061847 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-etc-cni-netd\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.062012 kubelet[2875]: I0302 13:32:50.061989 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-host-proc-sys-net\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.062202 kubelet[2875]: I0302 13:32:50.062178 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-hubble-tls\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.062367 kubelet[2875]: I0302 13:32:50.062312 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-bpf-maps\") pod \"cilium-qvv7p\" (UID: \"d97aa536-a8f8-4a75-bcd0-73f0f2b8f228\") " pod="kube-system/cilium-qvv7p"
Mar 2 13:32:50.078762 systemd[1]: Started sshd@25-10.230.30.118:22-68.220.241.50:60498.service - OpenSSH per-connection server daemon (68.220.241.50:60498).
Mar 2 13:32:50.335188 kubelet[2875]: I0302 13:32:50.335090 2875 setters.go:618] "Node became not ready" node="srv-u4d8l.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:32:50Z","lastTransitionTime":"2026-03-02T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 2 13:32:50.584236 sshd[4598]: Accepted publickey for core from 68.220.241.50 port 60498 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:32:50.586217 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:32:50.593797 systemd-logind[1545]: New session 26 of user core.
Mar 2 13:32:50.604635 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:32:50.857822 sshd[4602]: Connection closed by 68.220.241.50 port 60498
Mar 2 13:32:50.858695 sshd-session[4598]: pam_unix(sshd:session): session closed for user core
Mar 2 13:32:50.866380 systemd[1]: sshd@25-10.230.30.118:22-68.220.241.50:60498.service: Deactivated successfully.
Mar 2 13:32:50.870467 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:32:50.874612 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:32:50.876479 systemd-logind[1545]: Removed session 26.
Mar 2 13:32:50.970647 systemd[1]: Started sshd@26-10.230.30.118:22-68.220.241.50:60508.service - OpenSSH per-connection server daemon (68.220.241.50:60508).
Mar 2 13:32:51.166278 kubelet[2875]: E0302 13:32:51.166014 2875 projected.go:264] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.166278 kubelet[2875]: E0302 13:32:51.166079 2875 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qvv7p: failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.166278 kubelet[2875]: E0302 13:32:51.166266 2875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-hubble-tls podName:d97aa536-a8f8-4a75-bcd0-73f0f2b8f228 nodeName:}" failed. No retries permitted until 2026-03-02 13:32:51.666201411 +0000 UTC m=+124.471472694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-hubble-tls") pod "cilium-qvv7p" (UID: "d97aa536-a8f8-4a75-bcd0-73f0f2b8f228") : failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.167864 kubelet[2875]: E0302 13:32:51.166731 2875 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.167864 kubelet[2875]: E0302 13:32:51.166786 2875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cilium-ipsec-secrets podName:d97aa536-a8f8-4a75-bcd0-73f0f2b8f228 nodeName:}" failed. No retries permitted until 2026-03-02 13:32:51.666771683 +0000 UTC m=+124.472042967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-cilium-ipsec-secrets") pod "cilium-qvv7p" (UID: "d97aa536-a8f8-4a75-bcd0-73f0f2b8f228") : failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.167864 kubelet[2875]: E0302 13:32:51.166826 2875 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.167864 kubelet[2875]: E0302 13:32:51.166887 2875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-clustermesh-secrets podName:d97aa536-a8f8-4a75-bcd0-73f0f2b8f228 nodeName:}" failed. No retries permitted until 2026-03-02 13:32:51.666871933 +0000 UTC m=+124.472143217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/d97aa536-a8f8-4a75-bcd0-73f0f2b8f228-clustermesh-secrets") pod "cilium-qvv7p" (UID: "d97aa536-a8f8-4a75-bcd0-73f0f2b8f228") : failed to sync secret cache: timed out waiting for the condition
Mar 2 13:32:51.475241 sshd[4609]: Accepted publickey for core from 68.220.241.50 port 60508 ssh2: RSA SHA256:eJfPTcu5Pm24mvlygD7W7Kd1ohgQtGwIItOmwstcNsE
Mar 2 13:32:51.476879 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:32:51.485810 systemd-logind[1545]: New session 27 of user core.
Mar 2 13:32:51.497565 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 13:32:51.742365 containerd[1604]: time="2026-03-02T13:32:51.742241130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvv7p,Uid:d97aa536-a8f8-4a75-bcd0-73f0f2b8f228,Namespace:kube-system,Attempt:0,}"
Mar 2 13:32:51.790766 containerd[1604]: time="2026-03-02T13:32:51.789879342Z" level=info msg="connecting to shim 5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7" address="unix:///run/containerd/s/4f3d2d670f4a411d9261697d39145232e29bf2b83e3de716bcdf9d3f4138fbfc" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:32:51.838550 systemd[1]: Started cri-containerd-5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7.scope - libcontainer container 5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7.
Mar 2 13:32:51.913481 containerd[1604]: time="2026-03-02T13:32:51.913306411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvv7p,Uid:d97aa536-a8f8-4a75-bcd0-73f0f2b8f228,Namespace:kube-system,Attempt:0,} returns sandbox id \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\""
Mar 2 13:32:51.921195 containerd[1604]: time="2026-03-02T13:32:51.921074732Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:32:51.935295 containerd[1604]: time="2026-03-02T13:32:51.934415734Z" level=info msg="Container 87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:32:51.940998 containerd[1604]: time="2026-03-02T13:32:51.940958259Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57\""
Mar 2 13:32:51.943144 containerd[1604]: time="2026-03-02T13:32:51.943098454Z" level=info msg="StartContainer for \"87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57\""
Mar 2 13:32:51.945802 containerd[1604]: time="2026-03-02T13:32:51.945671629Z" level=info msg="connecting to shim 87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57" address="unix:///run/containerd/s/4f3d2d670f4a411d9261697d39145232e29bf2b83e3de716bcdf9d3f4138fbfc" protocol=ttrpc version=3
Mar 2 13:32:51.972455 systemd[1]: Started cri-containerd-87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57.scope - libcontainer container 87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57.
Mar 2 13:32:52.022012 containerd[1604]: time="2026-03-02T13:32:52.021747886Z" level=info msg="StartContainer for \"87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57\" returns successfully"
Mar 2 13:32:52.043767 systemd[1]: cri-containerd-87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57.scope: Deactivated successfully.
Mar 2 13:32:52.044576 systemd[1]: cri-containerd-87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57.scope: Consumed 35ms CPU time, 9.5M memory peak, 3.1M read from disk.
Mar 2 13:32:52.049312 containerd[1604]: time="2026-03-02T13:32:52.049148278Z" level=info msg="received container exit event container_id:\"87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57\" id:\"87a830738b20781532b754ba878b0cdd132b565d999170b2d6d3dfac6c8bde57\" pid:4680 exited_at:{seconds:1772458372 nanos:48603751}"
Mar 2 13:32:52.205686 containerd[1604]: time="2026-03-02T13:32:52.205556378Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:32:52.220098 containerd[1604]: time="2026-03-02T13:32:52.219723423Z" level=info msg="Container 5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:32:52.251768 containerd[1604]: time="2026-03-02T13:32:52.251699358Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58\""
Mar 2 13:32:52.253190 containerd[1604]: time="2026-03-02T13:32:52.252546804Z" level=info msg="StartContainer for \"5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58\""
Mar 2 13:32:52.253888 containerd[1604]: time="2026-03-02T13:32:52.253850166Z" level=info msg="connecting to shim 5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58" address="unix:///run/containerd/s/4f3d2d670f4a411d9261697d39145232e29bf2b83e3de716bcdf9d3f4138fbfc" protocol=ttrpc version=3
Mar 2 13:32:52.281650 systemd[1]: Started cri-containerd-5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58.scope - libcontainer container 5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58.
Mar 2 13:32:52.333149 containerd[1604]: time="2026-03-02T13:32:52.333102822Z" level=info msg="StartContainer for \"5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58\" returns successfully"
Mar 2 13:32:52.349402 systemd[1]: cri-containerd-5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58.scope: Deactivated successfully.
Mar 2 13:32:52.349861 systemd[1]: cri-containerd-5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58.scope: Consumed 33ms CPU time, 7.5M memory peak, 2.2M read from disk.
Mar 2 13:32:52.351542 containerd[1604]: time="2026-03-02T13:32:52.351431497Z" level=info msg="received container exit event container_id:\"5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58\" id:\"5990d92257dfceef7c8a56a509a860b84fd6d2836c8963af888b5f7df4eeae58\" pid:4727 exited_at:{seconds:1772458372 nanos:350242319}"
Mar 2 13:32:52.658433 kubelet[2875]: E0302 13:32:52.657348 2875 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:32:53.214660 containerd[1604]: time="2026-03-02T13:32:53.214590443Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:32:53.239941 containerd[1604]: time="2026-03-02T13:32:53.238920932Z" level=info msg="Container 73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:32:53.254459 containerd[1604]: time="2026-03-02T13:32:53.254402790Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16\""
Mar 2 13:32:53.255506 containerd[1604]: time="2026-03-02T13:32:53.255441720Z" level=info msg="StartContainer for \"73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16\""
Mar 2 13:32:53.258995 containerd[1604]: time="2026-03-02T13:32:53.258948386Z" level=info msg="connecting to shim 73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16" address="unix:///run/containerd/s/4f3d2d670f4a411d9261697d39145232e29bf2b83e3de716bcdf9d3f4138fbfc" protocol=ttrpc version=3
Mar 2 13:32:53.293401 systemd[1]: Started cri-containerd-73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16.scope - libcontainer container 73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16.
Mar 2 13:32:53.399607 containerd[1604]: time="2026-03-02T13:32:53.399507831Z" level=info msg="StartContainer for \"73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16\" returns successfully"
Mar 2 13:32:53.408457 systemd[1]: cri-containerd-73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16.scope: Deactivated successfully.
Mar 2 13:32:53.411131 containerd[1604]: time="2026-03-02T13:32:53.411069618Z" level=info msg="received container exit event container_id:\"73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16\" id:\"73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16\" pid:4772 exited_at:{seconds:1772458373 nanos:409730732}"
Mar 2 13:32:53.452948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73cc5dbe534c9b1f6a5761334a9202db2d1140a54d604000378bb0ceccb60d16-rootfs.mount: Deactivated successfully.
Mar 2 13:32:54.218130 containerd[1604]: time="2026-03-02T13:32:54.218006697Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:32:54.232290 containerd[1604]: time="2026-03-02T13:32:54.232215173Z" level=info msg="Container d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:32:54.240887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318440351.mount: Deactivated successfully.
Mar 2 13:32:54.249188 containerd[1604]: time="2026-03-02T13:32:54.246542170Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5\""
Mar 2 13:32:54.249188 containerd[1604]: time="2026-03-02T13:32:54.248065110Z" level=info msg="StartContainer for \"d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5\""
Mar 2 13:32:54.251296 containerd[1604]: time="2026-03-02T13:32:54.251248385Z" level=info msg="connecting to shim d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5" address="unix:///run/containerd/s/4f3d2d670f4a411d9261697d39145232e29bf2b83e3de716bcdf9d3f4138fbfc" protocol=ttrpc version=3
Mar 2 13:32:54.288710 systemd[1]: Started cri-containerd-d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5.scope - libcontainer container d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5.
Mar 2 13:32:54.341586 systemd[1]: cri-containerd-d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5.scope: Deactivated successfully.
Mar 2 13:32:54.345119 containerd[1604]: time="2026-03-02T13:32:54.345065656Z" level=info msg="received container exit event container_id:\"d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5\" id:\"d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5\" pid:4813 exited_at:{seconds:1772458374 nanos:342315526}"
Mar 2 13:32:54.349362 containerd[1604]: time="2026-03-02T13:32:54.349182471Z" level=info msg="StartContainer for \"d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5\" returns successfully"
Mar 2 13:32:54.381277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5c5d0533ff4f52645f44166b52aff643845962a93d7521057012934e8986ec5-rootfs.mount: Deactivated successfully.
Mar 2 13:32:55.229813 containerd[1604]: time="2026-03-02T13:32:55.229744464Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:32:55.247322 containerd[1604]: time="2026-03-02T13:32:55.246305870Z" level=info msg="Container e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:32:55.262261 containerd[1604]: time="2026-03-02T13:32:55.262203536Z" level=info msg="CreateContainer within sandbox \"5790c30b3750768e773d02d7ae130c5246f49c2f2362dd822bbd5c6bb55176f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b\""
Mar 2 13:32:55.265206 containerd[1604]: time="2026-03-02T13:32:55.264980674Z" level=info msg="StartContainer for \"e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b\""
Mar 2 13:32:55.266888 containerd[1604]: time="2026-03-02T13:32:55.266822258Z" level=info msg="connecting to shim e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b" address="unix:///run/containerd/s/4f3d2d670f4a411d9261697d39145232e29bf2b83e3de716bcdf9d3f4138fbfc" protocol=ttrpc version=3
Mar 2 13:32:55.304581 systemd[1]: Started cri-containerd-e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b.scope - libcontainer container e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b.
Mar 2 13:32:55.373090 containerd[1604]: time="2026-03-02T13:32:55.372962486Z" level=info msg="StartContainer for \"e66f5d9a57c97a8193c90ef63333c840620c599b122f5930838b42612d92f08b\" returns successfully"
Mar 2 13:32:56.205251 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 2 13:32:56.295768 kubelet[2875]: I0302 13:32:56.291719 2875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qvv7p" podStartSLOduration=7.291649828 podStartE2EDuration="7.291649828s" podCreationTimestamp="2026-03-02 13:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:32:56.291587477 +0000 UTC m=+129.096858802" watchObservedRunningTime="2026-03-02 13:32:56.291649828 +0000 UTC m=+129.096921122"
Mar 2 13:33:00.110218 systemd-networkd[1503]: lxc_health: Link UP
Mar 2 13:33:00.144326 systemd-networkd[1503]: lxc_health: Gained carrier
Mar 2 13:33:01.883586 systemd-networkd[1503]: lxc_health: Gained IPv6LL
Mar 2 13:33:04.946956 kubelet[2875]: E0302 13:33:04.946876 2875 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:57460->127.0.0.1:35975: read tcp 127.0.0.1:57460->127.0.0.1:35975: read: connection reset by peer
Mar 2 13:33:04.949378 kubelet[2875]: E0302 13:33:04.949326 2875 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57460->127.0.0.1:35975: write tcp 127.0.0.1:57460->127.0.0.1:35975: write: broken pipe
Mar 2 13:33:07.214957 sshd[4612]: Connection closed by 68.220.241.50 port 60508
Mar 2 13:33:07.217043 sshd-session[4609]: pam_unix(sshd:session): session closed for user core
Mar 2 13:33:07.233886 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit.
Mar 2 13:33:07.235990 systemd[1]: sshd@26-10.230.30.118:22-68.220.241.50:60508.service: Deactivated successfully.
Mar 2 13:33:07.239652 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 13:33:07.242773 systemd-logind[1545]: Removed session 27.