Jan 23 20:18:46.965405 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 20:18:46.965459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 20:18:46.965474 kernel: BIOS-provided physical RAM map:
Jan 23 20:18:46.965485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 20:18:46.965500 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 20:18:46.965511 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 20:18:46.965523 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 23 20:18:46.965534 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 23 20:18:46.965544 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 20:18:46.965555 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 20:18:46.965566 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 20:18:46.965577 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 20:18:46.965587 kernel: NX (Execute Disable) protection: active
Jan 23 20:18:46.965602 kernel: APIC: Static calls initialized
Jan 23 20:18:46.965615 kernel: SMBIOS 2.8 present.
Jan 23 20:18:46.965628 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 23 20:18:46.965639 kernel: DMI: Memory slots populated: 1/1
Jan 23 20:18:46.965651 kernel: Hypervisor detected: KVM
Jan 23 20:18:46.965662 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 20:18:46.965678 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 20:18:46.965690 kernel: kvm-clock: using sched offset of 5918202525 cycles
Jan 23 20:18:46.965702 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 20:18:46.965714 kernel: tsc: Detected 2499.998 MHz processor
Jan 23 20:18:46.965726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 20:18:46.965738 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 20:18:46.965749 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 20:18:46.965761 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 20:18:46.965773 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 20:18:46.965789 kernel: Using GB pages for direct mapping
Jan 23 20:18:46.965801 kernel: ACPI: Early table checksum verification disabled
Jan 23 20:18:46.965813 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 20:18:46.965824 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965836 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965848 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965860 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 23 20:18:46.965871 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965883 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965899 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965911 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:18:46.965923 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 23 20:18:46.965940 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 23 20:18:46.965953 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 23 20:18:46.965976 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 23 20:18:46.965994 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 23 20:18:46.966006 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 23 20:18:46.966019 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 23 20:18:46.966031 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 23 20:18:46.966043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 23 20:18:46.966055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 23 20:18:46.966068 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 23 20:18:46.966080 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 23 20:18:46.966118 kernel: Zone ranges:
Jan 23 20:18:46.966131 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 20:18:46.966143 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 23 20:18:46.966155 kernel: Normal empty
Jan 23 20:18:46.966167 kernel: Device empty
Jan 23 20:18:46.966179 kernel: Movable zone start for each node
Jan 23 20:18:46.966192 kernel: Early memory node ranges
Jan 23 20:18:46.966204 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 20:18:46.966216 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 23 20:18:46.966233 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 23 20:18:46.966245 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 20:18:46.966258 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 20:18:46.966270 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 23 20:18:46.966282 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 20:18:46.966295 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 20:18:46.966307 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 20:18:46.966319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 20:18:46.966332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 20:18:46.966348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 20:18:46.966360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 20:18:46.966373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 20:18:46.966385 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 20:18:46.966397 kernel: TSC deadline timer available
Jan 23 20:18:46.966409 kernel: CPU topo: Max. logical packages: 16
Jan 23 20:18:46.966422 kernel: CPU topo: Max. logical dies: 16
Jan 23 20:18:46.966434 kernel: CPU topo: Max. dies per package: 1
Jan 23 20:18:46.968612 kernel: CPU topo: Max. threads per core: 1
Jan 23 20:18:46.968627 kernel: CPU topo: Num. cores per package: 1
Jan 23 20:18:46.968646 kernel: CPU topo: Num. threads per package: 1
Jan 23 20:18:46.968659 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 23 20:18:46.968671 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 20:18:46.968684 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 20:18:46.968696 kernel: Booting paravirtualized kernel on KVM
Jan 23 20:18:46.968709 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 20:18:46.968721 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 23 20:18:46.968734 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 23 20:18:46.968746 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 23 20:18:46.968763 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 23 20:18:46.968775 kernel: kvm-guest: PV spinlocks enabled
Jan 23 20:18:46.968787 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 20:18:46.968801 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 20:18:46.968814 kernel: random: crng init done
Jan 23 20:18:46.968827 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 20:18:46.968839 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 20:18:46.968851 kernel: Fallback order for Node 0: 0
Jan 23 20:18:46.968868 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 23 20:18:46.968881 kernel: Policy zone: DMA32
Jan 23 20:18:46.968893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 20:18:46.968905 kernel: software IO TLB: area num 16.
Jan 23 20:18:46.968917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 23 20:18:46.968930 kernel: Kernel/User page tables isolation: enabled
Jan 23 20:18:46.968942 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 20:18:46.968955 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 20:18:46.968983 kernel: Dynamic Preempt: voluntary
Jan 23 20:18:46.969002 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 20:18:46.969015 kernel: rcu: RCU event tracing is enabled.
Jan 23 20:18:46.969028 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 23 20:18:46.969040 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 20:18:46.969052 kernel: Rude variant of Tasks RCU enabled.
Jan 23 20:18:46.969065 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 20:18:46.969077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 20:18:46.970131 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 23 20:18:46.970147 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 20:18:46.970167 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 20:18:46.970179 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 20:18:46.970192 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 23 20:18:46.970204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 20:18:46.970228 kernel: Console: colour VGA+ 80x25
Jan 23 20:18:46.970245 kernel: printk: legacy console [tty0] enabled
Jan 23 20:18:46.970258 kernel: printk: legacy console [ttyS0] enabled
Jan 23 20:18:46.970271 kernel: ACPI: Core revision 20240827
Jan 23 20:18:46.970284 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 20:18:46.970297 kernel: x2apic enabled
Jan 23 20:18:46.970310 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 20:18:46.970323 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 23 20:18:46.970340 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 23 20:18:46.970353 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 20:18:46.970366 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 20:18:46.970379 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 20:18:46.970391 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 20:18:46.970408 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 20:18:46.970421 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 20:18:46.970433 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 20:18:46.970446 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 20:18:46.970459 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 20:18:46.970471 kernel: MDS: Mitigation: Clear CPU buffers
Jan 23 20:18:46.970484 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 23 20:18:46.970496 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 23 20:18:46.970509 kernel: active return thunk: its_return_thunk
Jan 23 20:18:46.970522 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 20:18:46.970534 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 20:18:46.970551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 20:18:46.970564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 20:18:46.970577 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 20:18:46.970590 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 23 20:18:46.970602 kernel: Freeing SMP alternatives memory: 32K
Jan 23 20:18:46.970615 kernel: pid_max: default: 32768 minimum: 301
Jan 23 20:18:46.970628 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 20:18:46.970640 kernel: landlock: Up and running.
Jan 23 20:18:46.970653 kernel: SELinux: Initializing.
Jan 23 20:18:46.970665 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 20:18:46.970678 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 20:18:46.970691 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 23 20:18:46.970708 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 23 20:18:46.970721 kernel: signal: max sigframe size: 1776
Jan 23 20:18:46.970734 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 20:18:46.970747 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 20:18:46.970760 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 23 20:18:46.970773 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 20:18:46.970786 kernel: smp: Bringing up secondary CPUs ...
Jan 23 20:18:46.970799 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 20:18:46.970812 kernel: .... node #0, CPUs: #1
Jan 23 20:18:46.970829 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 20:18:46.970842 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 23 20:18:46.970856 kernel: Memory: 1887488K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 203112K reserved, 0K cma-reserved)
Jan 23 20:18:46.970869 kernel: devtmpfs: initialized
Jan 23 20:18:46.970894 kernel: x86/mm: Memory block size: 128MB
Jan 23 20:18:46.970906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 20:18:46.970918 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 23 20:18:46.970930 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 20:18:46.970942 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 20:18:46.970983 kernel: audit: initializing netlink subsys (disabled)
Jan 23 20:18:46.970996 kernel: audit: type=2000 audit(1769199522.812:1): state=initialized audit_enabled=0 res=1
Jan 23 20:18:46.971009 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 20:18:46.971021 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 20:18:46.971034 kernel: cpuidle: using governor menu
Jan 23 20:18:46.971047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 20:18:46.971060 kernel: dca service started, version 1.12.1
Jan 23 20:18:46.971073 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 20:18:46.971097 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 20:18:46.971118 kernel: PCI: Using configuration type 1 for base access
Jan 23 20:18:46.971131 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 20:18:46.971144 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 20:18:46.971157 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 20:18:46.971170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 20:18:46.971183 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 20:18:46.971196 kernel: ACPI: Added _OSI(Module Device)
Jan 23 20:18:46.971209 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 20:18:46.971222 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 20:18:46.971239 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 20:18:46.971252 kernel: ACPI: Interpreter enabled
Jan 23 20:18:46.971265 kernel: ACPI: PM: (supports S0 S5)
Jan 23 20:18:46.971277 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 20:18:46.971290 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 20:18:46.971303 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 20:18:46.971316 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 20:18:46.971329 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 20:18:46.971636 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 20:18:46.971823 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 20:18:46.972015 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 20:18:46.972036 kernel: PCI host bridge to bus 0000:00
Jan 23 20:18:46.975189 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 20:18:46.975349 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 20:18:46.975498 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 20:18:46.975654 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 20:18:46.975801 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 20:18:46.975946 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 23 20:18:46.980489 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 20:18:46.980718 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 20:18:46.980938 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 23 20:18:46.981149 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 23 20:18:46.981335 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 23 20:18:46.981497 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 23 20:18:46.981668 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 20:18:46.981873 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.982051 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 23 20:18:46.982242 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 20:18:46.982413 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 20:18:46.982573 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 20:18:46.982762 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.982925 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 23 20:18:46.983124 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 20:18:46.983293 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 20:18:46.983453 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 20:18:46.983655 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.983818 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 23 20:18:46.983990 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 20:18:46.984176 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 20:18:46.984336 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 20:18:46.984506 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.984667 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 23 20:18:46.984834 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 20:18:46.985007 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 20:18:46.988278 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 20:18:46.988493 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.988669 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 23 20:18:46.988839 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 20:18:46.989020 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 20:18:46.989226 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 20:18:46.989415 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.989580 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 23 20:18:46.989741 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 20:18:46.989902 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 20:18:46.990079 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 20:18:46.990300 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.990491 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 23 20:18:46.990652 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 20:18:46.990811 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 20:18:46.991007 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 20:18:46.991666 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:18:46.991836 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 23 20:18:46.992022 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 20:18:46.992234 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 20:18:46.992413 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 20:18:46.992608 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 20:18:46.992862 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 20:18:46.993044 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 23 20:18:46.993229 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 23 20:18:46.993391 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 23 20:18:46.993584 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 20:18:46.993747 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 20:18:46.993908 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 23 20:18:46.994106 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 23 20:18:46.994304 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 20:18:46.994469 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 20:18:46.994651 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 20:18:46.994821 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 23 20:18:46.994995 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 23 20:18:46.995220 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 20:18:46.995383 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 20:18:46.995605 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 23 20:18:46.998385 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 23 20:18:46.998584 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 20:18:46.998758 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 20:18:46.998930 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 20:18:46.999181 kernel: pci_bus 0000:02: extended config space not accessible
Jan 23 20:18:46.999375 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 23 20:18:46.999555 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 23 20:18:46.999741 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 20:18:46.999954 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 20:18:47.001211 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 23 20:18:47.001388 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 20:18:47.001607 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 20:18:47.001802 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 23 20:18:47.001980 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 20:18:47.002197 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 20:18:47.002365 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 20:18:47.002532 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 20:18:47.002697 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 20:18:47.002861 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 20:18:47.002882 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 20:18:47.002897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 20:18:47.002910 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 20:18:47.002932 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 20:18:47.002946 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 20:18:47.002959 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 20:18:47.002985 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 20:18:47.002999 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 20:18:47.003013 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 20:18:47.003026 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 20:18:47.003039 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 20:18:47.003053 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 20:18:47.003072 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 20:18:47.006130 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 20:18:47.006152 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 20:18:47.006167 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 20:18:47.006180 kernel: iommu: Default domain type: Translated
Jan 23 20:18:47.006194 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 20:18:47.006207 kernel: PCI: Using ACPI for IRQ routing
Jan 23 20:18:47.006220 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 20:18:47.006235 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 20:18:47.006256 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 23 20:18:47.006456 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 20:18:47.006629 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 20:18:47.006801 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 20:18:47.006821 kernel: vgaarb: loaded
Jan 23 20:18:47.006835 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 20:18:47.006848 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 20:18:47.006862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 20:18:47.006883 kernel: pnp: PnP ACPI init
Jan 23 20:18:47.007120 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 20:18:47.007143 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 20:18:47.007157 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 20:18:47.007171 kernel: NET: Registered PF_INET protocol family
Jan 23 20:18:47.007184 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 20:18:47.007198 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 20:18:47.007211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 20:18:47.007232 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 20:18:47.007245 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 20:18:47.007258 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 20:18:47.007271 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 20:18:47.007285 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 20:18:47.007298 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 20:18:47.007312 kernel: NET: Registered PF_XDP protocol family
Jan 23 20:18:47.007475 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 23 20:18:47.007672 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 20:18:47.007843 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 20:18:47.008044 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 20:18:47.010262 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 20:18:47.010435 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 20:18:47.010600 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 20:18:47.010764 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 20:18:47.010926 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 20:18:47.013193 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 20:18:47.013378 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 20:18:47.013550 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 20:18:47.013716 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 20:18:47.013882 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 20:18:47.014061 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 20:18:47.014258 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 20:18:47.014441 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 20:18:47.014644 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 20:18:47.014807 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 20:18:47.014980 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 20:18:47.017180 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 20:18:47.017350 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 20:18:47.017514 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 20:18:47.017695 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 20:18:47.017867 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 20:18:47.018048 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 20:18:47.018235 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 20:18:47.018409 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 20:18:47.018570 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 20:18:47.018761 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 20:18:47.018925 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 20:18:47.021138 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 20:18:47.021339 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 20:18:47.021502 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 20:18:47.021666 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 20:18:47.021846 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 20:18:47.022025 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 20:18:47.022214 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 20:18:47.022387 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 20:18:47.022550 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 20:18:47.022711 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 20:18:47.022872 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 20:18:47.023073 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 20:18:47.025275 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 20:18:47.025472 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 20:18:47.025634 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 20:18:47.025804 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 20:18:47.026019 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 20:18:47.026211 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 20:18:47.026375 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 20:18:47.026542 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 20:18:47.026694 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 20:18:47.026843 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 20:18:47.027006 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 20:18:47.029249 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 20:18:47.029436 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 23 20:18:47.029621 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 23 20:18:47.029778 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 23 20:18:47.029932 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 20:18:47.030154 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 23 20:18:47.030335 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 23 20:18:47.030491 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 23 20:18:47.030654 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 20:18:47.030833 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 23 20:18:47.031005 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 23 20:18:47.032828 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 20:18:47.033047 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 23 20:18:47.033294 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 23 20:18:47.033455 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 20:18:47.033688 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 23 20:18:47.033846 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 23 20:18:47.034045 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 20:18:47.034257 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 23 20:18:47.034408 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 23
20:18:47.034555 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 23 20:18:47.034757 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 23 20:18:47.034920 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 23 20:18:47.035112 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 23 20:18:47.035281 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 23 20:18:47.035435 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 23 20:18:47.035587 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 23 20:18:47.035621 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 20:18:47.035634 kernel: PCI: CLS 0 bytes, default 64 Jan 23 20:18:47.035655 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 20:18:47.035682 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 23 20:18:47.035696 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 20:18:47.035710 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 23 20:18:47.035724 kernel: Initialise system trusted keyrings Jan 23 20:18:47.035743 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 23 20:18:47.035757 kernel: Key type asymmetric registered Jan 23 20:18:47.035771 kernel: Asymmetric key parser 'x509' registered Jan 23 20:18:47.035789 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 20:18:47.035803 kernel: io scheduler mq-deadline registered Jan 23 20:18:47.035816 kernel: io scheduler kyber registered Jan 23 20:18:47.035830 kernel: io scheduler bfq registered Jan 23 20:18:47.036010 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 20:18:47.036194 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 20:18:47.036363 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.036529 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 20:18:47.036719 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 20:18:47.036882 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.037059 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 20:18:47.037245 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 20:18:47.037439 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.037614 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 20:18:47.037783 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 20:18:47.037944 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.038158 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 20:18:47.038324 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 20:18:47.038487 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.038650 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 20:18:47.038824 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 20:18:47.039003 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.039191 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 20:18:47.039353 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 20:18:47.039515 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.039676 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 20:18:47.039846 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 20:18:47.040021 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:18:47.040043 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 20:18:47.040058 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 20:18:47.040072 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 20:18:47.040168 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 20:18:47.040187 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 20:18:47.040208 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 20:18:47.040222 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 20:18:47.040236 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 20:18:47.040250 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 20:18:47.040423 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 20:18:47.040580 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 20:18:47.040733 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T20:18:46 UTC (1769199526) Jan 23 20:18:47.040885 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 20:18:47.040921 kernel: intel_pstate: CPU model not supported Jan 23 20:18:47.040935 kernel: NET: Registered PF_INET6 protocol family Jan 23 20:18:47.040949 kernel: Segment Routing with IPv6 Jan 23 20:18:47.040983 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 20:18:47.040998 kernel: NET: Registered PF_PACKET protocol family Jan 23 20:18:47.041012 kernel: Key type dns_resolver registered Jan 23 20:18:47.041026 kernel: IPI shorthand broadcast: enabled Jan 23 20:18:47.041040 kernel: 
sched_clock: Marking stable (3507004741, 229007295)->(3864816065, -128804029) Jan 23 20:18:47.041053 kernel: registered taskstats version 1 Jan 23 20:18:47.041073 kernel: Loading compiled-in X.509 certificates Jan 23 20:18:47.041102 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 20:18:47.041117 kernel: Demotion targets for Node 0: null Jan 23 20:18:47.041130 kernel: Key type .fscrypt registered Jan 23 20:18:47.041144 kernel: Key type fscrypt-provisioning registered Jan 23 20:18:47.041158 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 20:18:47.041172 kernel: ima: Allocated hash algorithm: sha1 Jan 23 20:18:47.041185 kernel: ima: No architecture policies found Jan 23 20:18:47.041199 kernel: clk: Disabling unused clocks Jan 23 20:18:47.041218 kernel: Warning: unable to open an initial console. Jan 23 20:18:47.041233 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 20:18:47.041259 kernel: Write protecting the kernel read-only data: 40960k Jan 23 20:18:47.041272 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 20:18:47.041285 kernel: Run /init as init process Jan 23 20:18:47.041303 kernel: with arguments: Jan 23 20:18:47.041316 kernel: /init Jan 23 20:18:47.041329 kernel: with environment: Jan 23 20:18:47.041342 kernel: HOME=/ Jan 23 20:18:47.041355 kernel: TERM=linux Jan 23 20:18:47.041382 systemd[1]: Successfully made /usr/ read-only. Jan 23 20:18:47.041401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 20:18:47.041417 systemd[1]: Detected virtualization kvm. 
Jan 23 20:18:47.041431 systemd[1]: Detected architecture x86-64. Jan 23 20:18:47.041445 systemd[1]: Running in initrd. Jan 23 20:18:47.041458 systemd[1]: No hostname configured, using default hostname. Jan 23 20:18:47.041473 systemd[1]: Hostname set to . Jan 23 20:18:47.041492 systemd[1]: Initializing machine ID from VM UUID. Jan 23 20:18:47.041506 systemd[1]: Queued start job for default target initrd.target. Jan 23 20:18:47.041520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 20:18:47.041534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 20:18:47.041562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 20:18:47.041577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 20:18:47.041592 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 20:18:47.041612 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 20:18:47.041628 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 20:18:47.041643 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 20:18:47.041657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 20:18:47.041672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 20:18:47.041687 systemd[1]: Reached target paths.target - Path Units. Jan 23 20:18:47.041701 systemd[1]: Reached target slices.target - Slice Units. Jan 23 20:18:47.041716 systemd[1]: Reached target swap.target - Swaps. Jan 23 20:18:47.041735 systemd[1]: Reached target timers.target - Timer Units. 
Jan 23 20:18:47.041750 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 20:18:47.041764 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 20:18:47.041779 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 20:18:47.041794 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 20:18:47.041808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 20:18:47.041823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 20:18:47.041837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 20:18:47.041857 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 20:18:47.041872 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 20:18:47.041887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 20:18:47.041901 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 20:18:47.041916 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 20:18:47.041931 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 20:18:47.041946 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 20:18:47.041969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 20:18:47.041987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:18:47.042007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 20:18:47.042075 systemd-journald[211]: Collecting audit messages is disabled. Jan 23 20:18:47.042136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 20:18:47.042152 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 23 20:18:47.042167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 20:18:47.042182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 20:18:47.042196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 20:18:47.042211 kernel: Bridge firewalling registered Jan 23 20:18:47.042231 systemd-journald[211]: Journal started Jan 23 20:18:47.042265 systemd-journald[211]: Runtime Journal (/run/log/journal/9a1b87df16124c7cbb90862a9a9db86b) is 4.7M, max 37.8M, 33.1M free. Jan 23 20:18:46.992730 systemd-modules-load[212]: Inserted module 'overlay' Jan 23 20:18:47.031851 systemd-modules-load[212]: Inserted module 'br_netfilter' Jan 23 20:18:47.098416 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 20:18:47.099760 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 20:18:47.100795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:18:47.106393 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 20:18:47.109231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 20:18:47.113296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 20:18:47.116242 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 20:18:47.135850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 20:18:47.140125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 20:18:47.144152 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 23 20:18:47.151264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 20:18:47.155408 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 20:18:47.157470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 20:18:47.162232 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 20:18:47.192762 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 20:18:47.213639 systemd-resolved[250]: Positive Trust Anchors: Jan 23 20:18:47.213677 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 20:18:47.213723 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 20:18:47.218354 systemd-resolved[250]: Defaulting to hostname 'linux'. Jan 23 20:18:47.220580 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 20:18:47.222054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 20:18:47.315131 kernel: SCSI subsystem initialized Jan 23 20:18:47.327115 kernel: Loading iSCSI transport class v2.0-870. Jan 23 20:18:47.341203 kernel: iscsi: registered transport (tcp) Jan 23 20:18:47.367713 kernel: iscsi: registered transport (qla4xxx) Jan 23 20:18:47.367802 kernel: QLogic iSCSI HBA Driver Jan 23 20:18:47.394020 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 20:18:47.412785 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 20:18:47.414280 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 20:18:47.482195 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 20:18:47.486326 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 20:18:47.550133 kernel: raid6: sse2x4 gen() 7639 MB/s Jan 23 20:18:47.568124 kernel: raid6: sse2x2 gen() 5330 MB/s Jan 23 20:18:47.586846 kernel: raid6: sse2x1 gen() 5182 MB/s Jan 23 20:18:47.586887 kernel: raid6: using algorithm sse2x4 gen() 7639 MB/s Jan 23 20:18:47.605794 kernel: raid6: .... xor() 4957 MB/s, rmw enabled Jan 23 20:18:47.605904 kernel: raid6: using ssse3x2 recovery algorithm Jan 23 20:18:47.632125 kernel: xor: automatically using best checksumming function avx Jan 23 20:18:47.826133 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 20:18:47.835531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 20:18:47.839047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 20:18:47.872291 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 23 20:18:47.881711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 20:18:47.887809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 23 20:18:47.924858 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Jan 23 20:18:47.959841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 20:18:47.962770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 20:18:48.093707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 20:18:48.098694 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 20:18:48.230112 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 23 20:18:48.248583 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 20:18:48.248659 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 23 20:18:48.281237 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 20:18:48.281319 kernel: GPT:17805311 != 125829119 Jan 23 20:18:48.281351 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 20:18:48.281381 kernel: GPT:17805311 != 125829119 Jan 23 20:18:48.281398 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 20:18:48.281427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 20:18:48.299132 kernel: ACPI: bus type USB registered Jan 23 20:18:48.299914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 20:18:48.302445 kernel: usbcore: registered new interface driver usbfs Jan 23 20:18:48.300139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:18:48.304064 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:18:48.308548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:18:48.310666 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 23 20:18:48.319112 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 20:18:48.322115 kernel: AES CTR mode by8 optimization enabled Jan 23 20:18:48.325125 kernel: usbcore: registered new interface driver hub Jan 23 20:18:48.327111 kernel: libata version 3.00 loaded. Jan 23 20:18:48.330112 kernel: usbcore: registered new device driver usb Jan 23 20:18:48.378609 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 20:18:48.381174 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 23 20:18:48.381428 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 23 20:18:48.381634 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 20:18:48.386517 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 20:18:48.391050 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 23 20:18:48.391325 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 20:18:48.391532 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 23 20:18:48.391745 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 20:18:48.393566 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 23 20:18:48.393783 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 20:18:48.403116 kernel: hub 1-0:1.0: USB hub found Jan 23 20:18:48.403437 kernel: hub 1-0:1.0: 4 ports detected Jan 23 20:18:48.407116 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 20:18:48.408155 kernel: hub 2-0:1.0: USB hub found Jan 23 20:18:48.408424 kernel: hub 2-0:1.0: 4 ports detected Jan 23 20:18:48.461148 kernel: scsi host0: ahci Jan 23 20:18:48.461855 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 23 20:18:48.527255 kernel: scsi host1: ahci Jan 23 20:18:48.527541 kernel: scsi host2: ahci Jan 23 20:18:48.527751 kernel: scsi host3: ahci Jan 23 20:18:48.527957 kernel: scsi host4: ahci Jan 23 20:18:48.528396 kernel: scsi host5: ahci Jan 23 20:18:48.528609 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1 Jan 23 20:18:48.528632 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1 Jan 23 20:18:48.528650 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1 Jan 23 20:18:48.528667 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1 Jan 23 20:18:48.528693 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1 Jan 23 20:18:48.528711 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1 Jan 23 20:18:48.527253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:18:48.557972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 20:18:48.568739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 20:18:48.569603 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 20:18:48.583604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 20:18:48.585800 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 20:18:48.623864 disk-uuid[610]: Primary Header is updated. Jan 23 20:18:48.623864 disk-uuid[610]: Secondary Entries is updated. Jan 23 20:18:48.623864 disk-uuid[610]: Secondary Header is updated. 
Jan 23 20:18:48.631276 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 20:18:48.646164 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 20:18:48.788118 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 20:18:48.788214 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 20:18:48.793140 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 20:18:48.793179 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 20:18:48.793199 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 20:18:48.797818 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 20:18:48.797855 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 20:18:48.814665 kernel: usbcore: registered new interface driver usbhid Jan 23 20:18:48.814763 kernel: usbhid: USB HID core driver Jan 23 20:18:48.825977 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 20:18:48.826056 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 23 20:18:48.841904 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 20:18:48.843388 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 20:18:48.844498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 20:18:48.846413 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 20:18:48.849288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 20:18:48.877491 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 20:18:49.641824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 20:18:49.644288 disk-uuid[611]: The operation has completed successfully. Jan 23 20:18:49.704948 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 23 20:18:49.705143 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 20:18:49.754840 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 20:18:49.772625 sh[637]: Success Jan 23 20:18:49.798504 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 20:18:49.798605 kernel: device-mapper: uevent: version 1.0.3 Jan 23 20:18:49.801220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 20:18:49.817162 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Jan 23 20:18:49.873640 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 20:18:49.879186 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 20:18:49.897784 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 20:18:49.911119 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (649) Jan 23 20:18:49.914522 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 20:18:49.914573 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 20:18:49.926757 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 20:18:49.926811 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 20:18:49.929170 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 20:18:49.930555 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 20:18:49.931450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 20:18:49.932676 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 23 20:18:49.938259 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 20:18:49.969839 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (680) Jan 23 20:18:49.969956 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:18:49.975152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 20:18:49.982241 kernel: BTRFS info (device vda6): turning on async discard Jan 23 20:18:49.982326 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 20:18:49.992139 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:18:49.993718 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 20:18:49.999154 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 20:18:50.098311 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 20:18:50.104299 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 20:18:50.164996 systemd-networkd[820]: lo: Link UP Jan 23 20:18:50.166333 systemd-networkd[820]: lo: Gained carrier Jan 23 20:18:50.168592 systemd-networkd[820]: Enumeration completed Jan 23 20:18:50.169830 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 20:18:50.171786 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 20:18:50.171793 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 20:18:50.173465 systemd[1]: Reached target network.target - Network. 
Jan 23 20:18:50.175348 systemd-networkd[820]: eth0: Link UP Jan 23 20:18:50.175608 systemd-networkd[820]: eth0: Gained carrier Jan 23 20:18:50.175736 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 20:18:50.207245 systemd-networkd[820]: eth0: DHCPv4 address 10.244.9.250/30, gateway 10.244.9.249 acquired from 10.244.9.249 Jan 23 20:18:50.223492 ignition[731]: Ignition 2.22.0 Jan 23 20:18:50.223525 ignition[731]: Stage: fetch-offline Jan 23 20:18:50.223637 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jan 23 20:18:50.223656 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:18:50.223822 ignition[731]: parsed url from cmdline: "" Jan 23 20:18:50.227684 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 20:18:50.223830 ignition[731]: no config URL provided Jan 23 20:18:50.223845 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 20:18:50.223861 ignition[731]: no config at "/usr/lib/ignition/user.ign" Jan 23 20:18:50.223877 ignition[731]: failed to fetch config: resource requires networking Jan 23 20:18:50.224405 ignition[731]: Ignition finished successfully Jan 23 20:18:50.233288 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 20:18:50.277451 ignition[830]: Ignition 2.22.0 Jan 23 20:18:50.277477 ignition[830]: Stage: fetch Jan 23 20:18:50.277707 ignition[830]: no configs at "/usr/lib/ignition/base.d" Jan 23 20:18:50.277727 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:18:50.277925 ignition[830]: parsed url from cmdline: "" Jan 23 20:18:50.277933 ignition[830]: no config URL provided Jan 23 20:18:50.277944 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 20:18:50.277960 ignition[830]: no config at "/usr/lib/ignition/user.ign" Jan 23 20:18:50.278174 ignition[830]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 20:18:50.278580 ignition[830]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 20:18:50.278624 ignition[830]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 23 20:18:50.298591 ignition[830]: GET result: OK Jan 23 20:18:50.298821 ignition[830]: parsing config with SHA512: ffa9f8df7f0ee9d0347b719bc032611eea518a984af45e864adc5400bde9a381d893ba58528480a2e38d2d157b75ef9d0849d9add4ba15deb4dfe7c9e9c2a74a Jan 23 20:18:50.306574 unknown[830]: fetched base config from "system" Jan 23 20:18:50.306595 unknown[830]: fetched base config from "system" Jan 23 20:18:50.307711 ignition[830]: fetch: fetch complete Jan 23 20:18:50.306604 unknown[830]: fetched user config from "openstack" Jan 23 20:18:50.307721 ignition[830]: fetch: fetch passed Jan 23 20:18:50.310931 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 20:18:50.307795 ignition[830]: Ignition finished successfully Jan 23 20:18:50.314291 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 20:18:50.359552 ignition[836]: Ignition 2.22.0 Jan 23 20:18:50.359596 ignition[836]: Stage: kargs Jan 23 20:18:50.359786 ignition[836]: no configs at "/usr/lib/ignition/base.d" Jan 23 20:18:50.359805 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:18:50.361419 ignition[836]: kargs: kargs passed Jan 23 20:18:50.361490 ignition[836]: Ignition finished successfully Jan 23 20:18:50.365759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 20:18:50.369249 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 20:18:50.405250 ignition[842]: Ignition 2.22.0 Jan 23 20:18:50.405277 ignition[842]: Stage: disks Jan 23 20:18:50.405483 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 23 20:18:50.405504 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:18:50.406463 ignition[842]: disks: disks passed Jan 23 20:18:50.408511 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 20:18:50.406534 ignition[842]: Ignition finished successfully Jan 23 20:18:50.410351 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 20:18:50.411376 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 20:18:50.412594 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 20:18:50.413991 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 20:18:50.415556 systemd[1]: Reached target basic.target - Basic System. Jan 23 20:18:50.418346 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 20:18:50.468731 systemd-fsck[850]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 20:18:50.473616 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 20:18:50.476206 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 23 20:18:50.627154 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 20:18:50.627979 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 20:18:50.629370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 20:18:50.632196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 20:18:50.635178 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 20:18:50.636353 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 20:18:50.638363 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 20:18:50.641498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 20:18:50.641550 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 20:18:50.656266 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 20:18:50.667161 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858) Jan 23 20:18:50.659755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 20:18:50.675121 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:18:50.679147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 20:18:50.691105 kernel: BTRFS info (device vda6): turning on async discard Jan 23 20:18:50.691183 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 20:18:50.695059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 20:18:50.747127 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:18:50.767909 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 20:18:50.777946 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory Jan 23 20:18:50.787107 initrd-setup-root[900]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 20:18:50.794443 initrd-setup-root[907]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 20:18:50.915762 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 20:18:50.920177 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 20:18:50.923262 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 20:18:50.946062 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 20:18:50.949598 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:18:50.970843 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 20:18:50.993413 ignition[975]: INFO : Ignition 2.22.0 Jan 23 20:18:50.993413 ignition[975]: INFO : Stage: mount Jan 23 20:18:50.995447 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 20:18:50.995447 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:18:50.995447 ignition[975]: INFO : mount: mount passed Jan 23 20:18:50.995447 ignition[975]: INFO : Ignition finished successfully Jan 23 20:18:50.997629 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 20:18:51.420446 systemd-networkd[820]: eth0: Gained IPv6LL Jan 23 20:18:51.783126 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:18:52.309987 systemd-networkd[820]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:27e:24:19ff:fef4:9fa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:27e:24:19ff:fef4:9fa/64 assigned by NDisc. 
Jan 23 20:18:52.310005 systemd-networkd[820]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 20:18:53.791157 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:18:57.805133 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:18:57.811885 coreos-metadata[860]: Jan 23 20:18:57.811 WARN failed to locate config-drive, using the metadata service API instead Jan 23 20:18:57.836420 coreos-metadata[860]: Jan 23 20:18:57.836 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 20:18:57.861515 coreos-metadata[860]: Jan 23 20:18:57.861 INFO Fetch successful Jan 23 20:18:57.862486 coreos-metadata[860]: Jan 23 20:18:57.862 INFO wrote hostname srv-1diuq.gb1.brightbox.com to /sysroot/etc/hostname Jan 23 20:18:57.865074 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 20:18:57.865299 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 20:18:57.869102 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 20:18:57.908287 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 20:18:57.941171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (991) Jan 23 20:18:57.945115 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:18:57.945175 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 20:18:57.952534 kernel: BTRFS info (device vda6): turning on async discard Jan 23 20:18:57.952609 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 20:18:57.957397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 20:18:58.005012 ignition[1008]: INFO : Ignition 2.22.0 Jan 23 20:18:58.005012 ignition[1008]: INFO : Stage: files Jan 23 20:18:58.006845 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 20:18:58.006845 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:18:58.006845 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping Jan 23 20:18:58.009681 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 20:18:58.009681 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 20:18:58.017820 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 20:18:58.017820 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 20:18:58.019823 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 20:18:58.017900 unknown[1008]: wrote ssh authorized keys file for user: core Jan 23 20:18:58.021865 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 20:18:58.021865 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 20:18:58.241531 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 20:18:58.588268 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 20:18:58.588268 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 20:18:58.588268 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 20:18:58.931170 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 20:18:59.492117 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 20:18:59.492117 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 20:18:59.492117 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 20:18:59.492117 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 20:18:59.501105 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 20:18:59.510400 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 20:18:59.510400 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 20:18:59.510400 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 20:18:59.845219 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 20:19:02.377155 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 20:19:02.380378 ignition[1008]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 20:19:02.380378 ignition[1008]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 20:19:02.385128 ignition[1008]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 20:19:02.385128 ignition[1008]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 20:19:02.388455 ignition[1008]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 23 20:19:02.388455 ignition[1008]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 20:19:02.388455 ignition[1008]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 
20:19:02.388455 ignition[1008]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 20:19:02.388455 ignition[1008]: INFO : files: files passed Jan 23 20:19:02.388455 ignition[1008]: INFO : Ignition finished successfully Jan 23 20:19:02.391645 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 20:19:02.396185 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 20:19:02.402285 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 20:19:02.435478 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 20:19:02.435741 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 20:19:02.447803 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 20:19:02.449800 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 20:19:02.449800 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 20:19:02.450773 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 20:19:02.452263 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 20:19:02.454915 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 20:19:02.514952 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 20:19:02.515143 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 20:19:02.516664 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 20:19:02.517731 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 23 20:19:02.519498 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 20:19:02.520699 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 20:19:02.549920 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 20:19:02.552978 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 20:19:02.577610 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 20:19:02.578625 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 20:19:02.580281 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 20:19:02.581944 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 20:19:02.582234 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 20:19:02.583980 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 20:19:02.585016 systemd[1]: Stopped target basic.target - Basic System. Jan 23 20:19:02.586611 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 20:19:02.588262 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 20:19:02.589654 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 20:19:02.591340 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 20:19:02.592870 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 20:19:02.594583 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 20:19:02.596140 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 20:19:02.597884 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 20:19:02.599326 systemd[1]: Stopped target swap.target - Swaps. 
Jan 23 20:19:02.600900 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 20:19:02.601190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 20:19:02.602916 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 20:19:02.603946 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 20:19:02.605315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 20:19:02.605535 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 20:19:02.606882 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 20:19:02.607176 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 20:19:02.615057 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 20:19:02.615270 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 20:19:02.617324 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 20:19:02.617570 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 20:19:02.620345 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 20:19:02.624655 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 20:19:02.627403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 20:19:02.627673 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 20:19:02.629264 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 20:19:02.629514 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 20:19:02.638846 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 20:19:02.641162 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 20:19:02.663586 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 23 20:19:02.667079 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 20:19:02.667914 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 20:19:02.679089 ignition[1063]: INFO : Ignition 2.22.0 Jan 23 20:19:02.679089 ignition[1063]: INFO : Stage: umount Jan 23 20:19:02.680941 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 20:19:02.680941 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 20:19:02.684137 ignition[1063]: INFO : umount: umount passed Jan 23 20:19:02.684137 ignition[1063]: INFO : Ignition finished successfully Jan 23 20:19:02.685617 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 20:19:02.685822 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 20:19:02.687398 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 20:19:02.687480 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 20:19:02.688748 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 20:19:02.688829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 20:19:02.690197 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 20:19:02.690278 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 20:19:02.691543 systemd[1]: Stopped target network.target - Network. Jan 23 20:19:02.692890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 20:19:02.692976 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 20:19:02.694488 systemd[1]: Stopped target paths.target - Path Units. Jan 23 20:19:02.695759 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 20:19:02.699155 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 20:19:02.700797 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 23 20:19:02.702323 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 20:19:02.704105 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 20:19:02.704203 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 20:19:02.705401 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 20:19:02.705478 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 20:19:02.706803 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 20:19:02.706890 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 20:19:02.708211 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 20:19:02.708276 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 20:19:02.709827 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 20:19:02.709904 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 20:19:02.711570 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 20:19:02.713536 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 20:19:02.717271 systemd-networkd[820]: eth0: DHCPv6 lease lost Jan 23 20:19:02.724180 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 20:19:02.724716 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 20:19:02.727776 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 20:19:02.728153 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 20:19:02.728347 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 20:19:02.733130 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 20:19:02.733704 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 20:19:02.735002 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 23 20:19:02.735071 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 20:19:02.737717 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 20:19:02.739482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 20:19:02.739556 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 20:19:02.741550 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 20:19:02.741652 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 20:19:02.744302 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 20:19:02.744372 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 20:19:02.747170 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 20:19:02.747239 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 20:19:02.749399 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 20:19:02.753040 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 20:19:02.755141 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 20:19:02.758550 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 20:19:02.758828 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 20:19:02.762206 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 20:19:02.762339 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 20:19:02.764543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 20:19:02.764606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 20:19:02.766213 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 23 20:19:02.766304 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 20:19:02.768756 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 20:19:02.768825 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 20:19:02.770358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 20:19:02.770447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 20:19:02.773147 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 20:19:02.775442 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 20:19:02.775519 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 20:19:02.778244 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 20:19:02.778316 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 20:19:02.780266 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 20:19:02.780362 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 20:19:02.782208 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 20:19:02.782275 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 20:19:02.784467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 20:19:02.784548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:19:02.797018 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 20:19:02.797142 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Jan 23 20:19:02.797223 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 20:19:02.797294 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 20:19:02.797989 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 20:19:02.798180 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 20:19:02.802895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 20:19:02.804075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 20:19:02.807827 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 20:19:02.810157 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 20:19:02.833943 systemd[1]: Switching root. Jan 23 20:19:02.878555 systemd-journald[211]: Journal stopped Jan 23 20:19:04.672801 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). Jan 23 20:19:04.672923 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 20:19:04.672950 kernel: SELinux: policy capability open_perms=1 Jan 23 20:19:04.672969 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 20:19:04.672988 kernel: SELinux: policy capability always_check_network=0 Jan 23 20:19:04.673016 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 20:19:04.673037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 20:19:04.673062 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 20:19:04.673118 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 20:19:04.673156 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 20:19:04.673183 kernel: audit: type=1403 audit(1769199543.364:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 20:19:04.673204 systemd[1]: Successfully loaded SELinux policy in 79.793ms. 
Jan 23 20:19:04.673240 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.514ms.
Jan 23 20:19:04.673263 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 20:19:04.673286 systemd[1]: Detected virtualization kvm.
Jan 23 20:19:04.673306 systemd[1]: Detected architecture x86-64.
Jan 23 20:19:04.673326 systemd[1]: Detected first boot.
Jan 23 20:19:04.673362 systemd[1]: Hostname set to .
Jan 23 20:19:04.673384 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 20:19:04.673419 zram_generator::config[1111]: No configuration found.
Jan 23 20:19:04.673443 kernel: Guest personality initialized and is inactive
Jan 23 20:19:04.673472 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 20:19:04.673491 kernel: Initialized host personality
Jan 23 20:19:04.673510 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 20:19:04.673530 systemd[1]: Populated /etc with preset unit settings.
Jan 23 20:19:04.673565 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 20:19:04.673588 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 20:19:04.673609 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 20:19:04.673630 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 20:19:04.673673 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 20:19:04.673697 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 20:19:04.673737 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 20:19:04.673765 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 20:19:04.673787 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 20:19:04.673816 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 20:19:04.673838 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 20:19:04.673859 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 20:19:04.673881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 20:19:04.673903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 20:19:04.673924 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 20:19:04.673951 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 20:19:04.673973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 20:19:04.673995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 20:19:04.674024 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 20:19:04.674046 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 20:19:04.674067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 20:19:04.674116 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 20:19:04.674139 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 20:19:04.674159 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 20:19:04.674223 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 20:19:04.674254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 20:19:04.674277 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 20:19:04.674298 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 20:19:04.674319 systemd[1]: Reached target swap.target - Swaps.
Jan 23 20:19:04.674341 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 20:19:04.674369 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 20:19:04.674391 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 20:19:04.674419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 20:19:04.674441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 20:19:04.674473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 20:19:04.674495 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 20:19:04.674517 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 20:19:04.674539 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 20:19:04.674560 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 20:19:04.674587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:04.674609 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 20:19:04.674630 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 20:19:04.674663 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 20:19:04.674687 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 20:19:04.674709 systemd[1]: Reached target machines.target - Containers.
Jan 23 20:19:04.674730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 20:19:04.674751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:19:04.674778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 20:19:04.674800 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 20:19:04.674821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 20:19:04.674842 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 20:19:04.674863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 20:19:04.674884 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 20:19:04.674909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 20:19:04.674930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 20:19:04.674951 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 20:19:04.674989 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 20:19:04.675013 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 20:19:04.675034 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 20:19:04.675057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:19:04.675137 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 20:19:04.675179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 20:19:04.675215 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 20:19:04.675239 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 20:19:04.675262 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 20:19:04.675308 kernel: loop: module loaded
Jan 23 20:19:04.675332 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 20:19:04.675353 kernel: fuse: init (API version 7.41)
Jan 23 20:19:04.675373 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 20:19:04.675393 systemd[1]: Stopped verity-setup.service.
Jan 23 20:19:04.675416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:04.675437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 20:19:04.675489 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 20:19:04.675518 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 20:19:04.675560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 20:19:04.675582 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 20:19:04.675614 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 20:19:04.675637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 20:19:04.675672 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 20:19:04.675704 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 20:19:04.675725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 20:19:04.675747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 20:19:04.675783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 20:19:04.675806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 20:19:04.675827 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 20:19:04.675861 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 20:19:04.675884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 20:19:04.675905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 20:19:04.675925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 20:19:04.675947 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 20:19:04.675967 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 20:19:04.676000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 20:19:04.676023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 20:19:04.676118 systemd-journald[1190]: Collecting audit messages is disabled.
Jan 23 20:19:04.676172 systemd-journald[1190]: Journal started
Jan 23 20:19:04.676214 systemd-journald[1190]: Runtime Journal (/run/log/journal/9a1b87df16124c7cbb90862a9a9db86b) is 4.7M, max 37.8M, 33.1M free.
Jan 23 20:19:04.208377 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 20:19:04.219671 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 20:19:04.220495 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 20:19:04.691893 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 20:19:04.691993 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 20:19:04.703348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 20:19:04.705368 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 20:19:04.706894 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 20:19:04.713882 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 20:19:04.715957 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 20:19:04.754511 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 20:19:04.755761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 20:19:04.755809 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 20:19:04.759003 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 20:19:04.771370 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 20:19:04.772411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:19:04.776914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 20:19:04.787074 kernel: ACPI: bus type drm_connector registered
Jan 23 20:19:04.781993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 20:19:04.785228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 20:19:04.792802 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 20:19:04.797949 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Jan 23 20:19:04.797978 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Jan 23 20:19:04.798244 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 20:19:04.802200 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 20:19:04.807412 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 20:19:04.808599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 20:19:04.811475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 20:19:04.833579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 20:19:04.842458 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 20:19:04.864335 systemd-journald[1190]: Time spent on flushing to /var/log/journal/9a1b87df16124c7cbb90862a9a9db86b is 94.713ms for 1174 entries.
Jan 23 20:19:04.864335 systemd-journald[1190]: System Journal (/var/log/journal/9a1b87df16124c7cbb90862a9a9db86b) is 8M, max 584.8M, 576.8M free.
Jan 23 20:19:04.997457 systemd-journald[1190]: Received client request to flush runtime journal.
Jan 23 20:19:04.997548 kernel: loop0: detected capacity change from 0 to 224512
Jan 23 20:19:04.997706 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 20:19:04.862488 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 20:19:04.864179 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 20:19:04.871341 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 20:19:04.930132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 20:19:05.001313 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 20:19:05.009311 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 20:19:05.020267 kernel: loop1: detected capacity change from 0 to 110984
Jan 23 20:19:05.045186 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 20:19:05.049264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 20:19:05.074133 kernel: loop2: detected capacity change from 0 to 128560
Jan 23 20:19:05.105686 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jan 23 20:19:05.106208 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jan 23 20:19:05.115361 kernel: loop3: detected capacity change from 0 to 8
Jan 23 20:19:05.142117 kernel: loop4: detected capacity change from 0 to 224512
Jan 23 20:19:05.150169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 20:19:05.168470 kernel: loop5: detected capacity change from 0 to 110984
Jan 23 20:19:05.194139 kernel: loop6: detected capacity change from 0 to 128560
Jan 23 20:19:05.225178 kernel: loop7: detected capacity change from 0 to 8
Jan 23 20:19:05.225284 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 20:19:05.226831 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 23 20:19:05.227725 (sd-merge)[1273]: Merged extensions into '/usr'.
Jan 23 20:19:05.237292 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 20:19:05.237340 systemd[1]: Reloading...
Jan 23 20:19:05.373138 zram_generator::config[1296]: No configuration found.
Jan 23 20:19:05.632138 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 20:19:05.794793 systemd[1]: Reloading finished in 556 ms.
Jan 23 20:19:05.812855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 20:19:05.814344 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 20:19:05.826120 systemd[1]: Starting ensure-sysext.service...
Jan 23 20:19:05.828280 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 20:19:05.864342 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 20:19:05.864692 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 20:19:05.865182 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 20:19:05.865597 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 20:19:05.866288 systemd[1]: Reload requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)...
Jan 23 20:19:05.866407 systemd[1]: Reloading...
Jan 23 20:19:05.867044 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 20:19:05.867436 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 23 20:19:05.867545 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 23 20:19:05.874321 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 20:19:05.874337 systemd-tmpfiles[1357]: Skipping /boot
Jan 23 20:19:05.892933 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 20:19:05.892956 systemd-tmpfiles[1357]: Skipping /boot
Jan 23 20:19:05.953121 zram_generator::config[1380]: No configuration found.
Jan 23 20:19:06.245815 systemd[1]: Reloading finished in 378 ms.
Jan 23 20:19:06.270786 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 20:19:06.284416 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 20:19:06.295498 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 20:19:06.301422 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 20:19:06.309701 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 20:19:06.314500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 20:19:06.324539 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 20:19:06.331302 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 20:19:06.338204 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:06.338490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:19:06.344453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 20:19:06.349392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 20:19:06.352489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 20:19:06.354303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:19:06.354477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:19:06.363439 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 20:19:06.364238 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:06.370304 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:06.370581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:19:06.370846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:19:06.370975 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:19:06.371160 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:06.382359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:06.382810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:19:06.386840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 20:19:06.389314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:19:06.389652 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:19:06.389943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:19:06.397933 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 20:19:06.400883 systemd[1]: Finished ensure-sysext.service.
Jan 23 20:19:06.407644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 20:19:06.410518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 20:19:06.427788 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 20:19:06.434356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 20:19:06.436541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 20:19:06.438866 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 20:19:06.440433 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 20:19:06.441455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 20:19:06.443896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 20:19:06.445203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 20:19:06.453699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 20:19:06.455214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 20:19:06.455260 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 20:19:06.470452 systemd-udevd[1446]: Using default interface naming scheme 'v255'.
Jan 23 20:19:06.482397 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 20:19:06.487144 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 20:19:06.516445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 20:19:06.519016 augenrules[1482]: No rules
Jan 23 20:19:06.525204 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 20:19:06.527528 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 20:19:06.527924 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 20:19:06.529146 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 20:19:06.568363 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 20:19:06.748661 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 20:19:06.822228 systemd-networkd[1488]: lo: Link UP
Jan 23 20:19:06.822242 systemd-networkd[1488]: lo: Gained carrier
Jan 23 20:19:06.823624 systemd-networkd[1488]: Enumeration completed
Jan 23 20:19:06.823759 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 20:19:06.830376 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 20:19:06.836555 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 20:19:06.838353 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 20:19:06.843308 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 20:19:06.891921 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 20:19:06.900951 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 20:19:06.900966 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 20:19:06.904226 systemd-networkd[1488]: eth0: Link UP
Jan 23 20:19:06.905148 systemd-networkd[1488]: eth0: Gained carrier
Jan 23 20:19:06.905172 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 20:19:06.906857 systemd-resolved[1445]: Positive Trust Anchors:
Jan 23 20:19:06.907279 systemd-resolved[1445]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 20:19:06.907442 systemd-resolved[1445]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 20:19:06.916832 systemd-resolved[1445]: Using system hostname 'srv-1diuq.gb1.brightbox.com'.
Jan 23 20:19:06.919511 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 20:19:06.921471 systemd[1]: Reached target network.target - Network.
Jan 23 20:19:06.922178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 20:19:06.923749 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 20:19:06.924587 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 20:19:06.925475 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 20:19:06.927296 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 20:19:06.928269 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 20:19:06.930138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 20:19:06.930923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 20:19:06.931727 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 20:19:06.931781 systemd[1]: Reached target paths.target - Path Units.
Jan 23 20:19:06.932735 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 20:19:06.935300 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 20:19:06.939982 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 20:19:06.947158 systemd-networkd[1488]: eth0: DHCPv4 address 10.244.9.250/30, gateway 10.244.9.249 acquired from 10.244.9.249
Jan 23 20:19:06.948228 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 20:19:06.949760 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 20:19:06.950291 systemd-timesyncd[1466]: Network configuration changed, trying to establish connection.
Jan 23 20:19:06.951050 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 20:19:06.961468 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 20:19:06.963789 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 20:19:06.965811 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 20:19:06.968222 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 20:19:06.969891 systemd[1]: Reached target basic.target - Basic System.
Jan 23 20:19:06.970662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 20:19:06.970714 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 20:19:06.973381 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 20:19:06.978347 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 20:19:06.982982 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 20:19:06.992809 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 20:19:06.996304 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 20:19:07.002008 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 20:19:07.003203 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 20:19:07.012043 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 20:19:07.022063 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 20:19:07.022258 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:19:07.029416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 20:19:07.038142 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 20:19:07.045419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 20:19:07.046411 extend-filesystems[1540]: Found /dev/vda6
Jan 23 20:19:07.057053 jq[1539]: false
Jan 23 20:19:07.058942 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 20:19:07.060908 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 20:19:07.062942 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 20:19:07.065939 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 20:19:07.074383 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 20:19:07.080275 systemd-timesyncd[1466]: Contacted time server 149.22.188.7:123 (0.flatcar.pool.ntp.org). Jan 23 20:19:07.080411 systemd-timesyncd[1466]: Initial clock synchronization to Fri 2026-01-23 20:19:07.153681 UTC. Jan 23 20:19:07.083821 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 20:19:07.085236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 20:19:07.085581 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 20:19:07.108818 extend-filesystems[1540]: Found /dev/vda9 Jan 23 20:19:07.117036 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 20:19:07.118468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 20:19:07.124387 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache Jan 23 20:19:07.124390 oslogin_cache_refresh[1541]: Refreshing passwd entry cache Jan 23 20:19:07.131695 extend-filesystems[1540]: Checking size of /dev/vda9 Jan 23 20:19:07.148475 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 20:19:07.149070 update_engine[1554]: I20260123 20:19:07.148967 1554 main.cc:92] Flatcar Update Engine starting Jan 23 20:19:07.150180 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 20:19:07.162351 extend-filesystems[1540]: Resized partition /dev/vda9 Jan 23 20:19:07.164275 jq[1557]: true Jan 23 20:19:07.166111 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting Jan 23 20:19:07.166111 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 20:19:07.166111 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache Jan 23 20:19:07.166370 extend-filesystems[1578]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 20:19:07.165545 oslogin_cache_refresh[1541]: Failure getting users, quitting Jan 23 20:19:07.165582 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 20:19:07.165668 oslogin_cache_refresh[1541]: Refreshing group entry cache Jan 23 20:19:07.176296 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 23 20:19:07.173222 oslogin_cache_refresh[1541]: Failure getting groups, quitting Jan 23 20:19:07.174751 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 20:19:07.176568 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting Jan 23 20:19:07.176568 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 20:19:07.173241 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 20:19:07.175972 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jan 23 20:19:07.187647 tar[1562]: linux-amd64/LICENSE Jan 23 20:19:07.188215 tar[1562]: linux-amd64/helm Jan 23 20:19:07.207445 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 20:19:07.257964 jq[1580]: true Jan 23 20:19:07.265625 dbus-daemon[1537]: [system] SELinux support is enabled Jan 23 20:19:07.265900 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 20:19:07.273683 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 20:19:07.273745 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 20:19:07.275693 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 20:19:07.275730 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 20:19:07.296778 dbus-daemon[1537]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1488 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 20:19:07.303911 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 20:19:07.304882 update_engine[1554]: I20260123 20:19:07.303277 1554 update_check_scheduler.cc:74] Next update check in 5m7s Jan 23 20:19:07.304786 systemd[1]: Started update-engine.service - Update Engine. Jan 23 20:19:07.328923 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 20:19:07.396208 systemd-logind[1552]: New seat seat0. Jan 23 20:19:07.400146 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 20:19:07.483634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 20:19:07.490128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 20:19:07.514475 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Jan 23 20:19:07.524129 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 20:19:07.533192 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 20:19:07.543651 systemd[1]: Starting sshkeys.service... Jan 23 20:19:07.546117 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 23 20:19:07.589789 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 20:19:07.589789 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 23 20:19:07.589789 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 23 20:19:07.587626 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 20:19:07.601384 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Jan 23 20:19:07.587976 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 20:19:07.598157 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 20:19:07.607327 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 20:19:07.642123 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 20:19:07.646708 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 20:19:07.675775 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 20:19:07.679983 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 20:19:07.682268 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 23 20:19:07.682898 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 20:19:07.698047 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 20:19:07.723958 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:07.735280 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 20:19:07.776308 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 20:19:07.786675 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 20:19:07.797019 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 20:19:07.794910 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 20:19:07.796830 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 20:19:07.809135 kernel: ACPI: button: Power Button [PWRF] Jan 23 20:19:07.809602 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 20:19:07.818623 dbus-daemon[1537]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 20:19:07.824314 dbus-daemon[1537]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1586 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 20:19:07.831805 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 23 20:19:07.882398 containerd[1573]: time="2026-01-23T20:19:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 20:19:07.885002 containerd[1573]: time="2026-01-23T20:19:07.884411010Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 20:19:07.924708 containerd[1573]: time="2026-01-23T20:19:07.924632170Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="30.493µs" Jan 23 20:19:07.924708 containerd[1573]: time="2026-01-23T20:19:07.924701218Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 20:19:07.924853 containerd[1573]: time="2026-01-23T20:19:07.924733658Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 20:19:07.925105 containerd[1573]: time="2026-01-23T20:19:07.925059138Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 20:19:07.927816 containerd[1573]: time="2026-01-23T20:19:07.927783312Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 20:19:07.927884 containerd[1573]: time="2026-01-23T20:19:07.927860279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 20:19:07.928013 containerd[1573]: time="2026-01-23T20:19:07.927981641Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 20:19:07.928110 containerd[1573]: time="2026-01-23T20:19:07.928020737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 
20:19:07.930325 containerd[1573]: time="2026-01-23T20:19:07.930223853Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930325 containerd[1573]: time="2026-01-23T20:19:07.930263625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930325 containerd[1573]: time="2026-01-23T20:19:07.930283813Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930325 containerd[1573]: time="2026-01-23T20:19:07.930299880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930513 containerd[1573]: time="2026-01-23T20:19:07.930433851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930854 containerd[1573]: time="2026-01-23T20:19:07.930786221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930932 containerd[1573]: time="2026-01-23T20:19:07.930849473Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 20:19:07.930932 containerd[1573]: time="2026-01-23T20:19:07.930869342Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 20:19:07.934174 containerd[1573]: time="2026-01-23T20:19:07.934141844Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 20:19:07.934484 
containerd[1573]: time="2026-01-23T20:19:07.934459022Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 20:19:07.934654 containerd[1573]: time="2026-01-23T20:19:07.934555295Z" level=info msg="metadata content store policy set" policy=shared Jan 23 20:19:07.942390 containerd[1573]: time="2026-01-23T20:19:07.942138742Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 20:19:07.942390 containerd[1573]: time="2026-01-23T20:19:07.942242561Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 20:19:07.942390 containerd[1573]: time="2026-01-23T20:19:07.942321129Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 20:19:07.942390 containerd[1573]: time="2026-01-23T20:19:07.942356997Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942390940Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942437662Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942475357Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942497039Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942526718Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: 
time="2026-01-23T20:19:07.942546858Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942562405Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 20:19:07.942607 containerd[1573]: time="2026-01-23T20:19:07.942604750Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 20:19:07.942900 containerd[1573]: time="2026-01-23T20:19:07.942787420Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 20:19:07.942900 containerd[1573]: time="2026-01-23T20:19:07.942825878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 20:19:07.942900 containerd[1573]: time="2026-01-23T20:19:07.942849068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 20:19:07.942900 containerd[1573]: time="2026-01-23T20:19:07.942869675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 20:19:07.942900 containerd[1573]: time="2026-01-23T20:19:07.942888938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 20:19:07.943109 containerd[1573]: time="2026-01-23T20:19:07.942906520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 20:19:07.943109 containerd[1573]: time="2026-01-23T20:19:07.942924152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 20:19:07.943109 containerd[1573]: time="2026-01-23T20:19:07.942941107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 20:19:07.943109 containerd[1573]: time="2026-01-23T20:19:07.942966899Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 20:19:07.943109 containerd[1573]: time="2026-01-23T20:19:07.942986107Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 20:19:07.943109 containerd[1573]: time="2026-01-23T20:19:07.943002547Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 20:19:07.945219 containerd[1573]: time="2026-01-23T20:19:07.945163127Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 20:19:07.945219 containerd[1573]: time="2026-01-23T20:19:07.945204176Z" level=info msg="Start snapshots syncer" Jan 23 20:19:07.945316 containerd[1573]: time="2026-01-23T20:19:07.945270499Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 20:19:07.947760 containerd[1573]: time="2026-01-23T20:19:07.947254237Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 20:19:07.947760 containerd[1573]: time="2026-01-23T20:19:07.947354869Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 20:19:07.951325 containerd[1573]: time="2026-01-23T20:19:07.951160063Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 20:19:07.951515 containerd[1573]: time="2026-01-23T20:19:07.951468761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 20:19:07.951580 containerd[1573]: time="2026-01-23T20:19:07.951517230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 20:19:07.951580 containerd[1573]: time="2026-01-23T20:19:07.951546748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 20:19:07.951580 containerd[1573]: time="2026-01-23T20:19:07.951564528Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 20:19:07.951733 containerd[1573]: time="2026-01-23T20:19:07.951601584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 20:19:07.951733 containerd[1573]: time="2026-01-23T20:19:07.951624984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 20:19:07.951733 containerd[1573]: time="2026-01-23T20:19:07.951643460Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 20:19:07.951733 containerd[1573]: time="2026-01-23T20:19:07.951675192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 20:19:07.951733 containerd[1573]: time="2026-01-23T20:19:07.951693902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 20:19:07.951733 containerd[1573]: time="2026-01-23T20:19:07.951712533Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954421363Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954549543Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954571386Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954615069Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954632854Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954650011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954677583Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954714200Z" level=info msg="runtime interface created" Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954726177Z" level=info msg="created NRI interface" Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954740343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954761836Z" level=info msg="Connect containerd service" Jan 23 20:19:07.955061 containerd[1573]: time="2026-01-23T20:19:07.954792238Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 20:19:07.961065 
containerd[1573]: time="2026-01-23T20:19:07.959882743Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 20:19:08.089157 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 20:19:08.096206 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 20:19:08.107712 polkitd[1637]: Started polkitd version 126 Jan 23 20:19:08.126679 polkitd[1637]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 20:19:08.129285 polkitd[1637]: Loading rules from directory /run/polkit-1/rules.d Jan 23 20:19:08.129374 polkitd[1637]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 20:19:08.130003 polkitd[1637]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 20:19:08.130055 polkitd[1637]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 20:19:08.130141 polkitd[1637]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 20:19:08.134915 polkitd[1637]: Finished loading, compiling and executing 2 rules Jan 23 20:19:08.135797 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 20:19:08.139768 dbus-daemon[1537]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 20:19:08.141969 polkitd[1637]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 20:19:08.143128 containerd[1573]: time="2026-01-23T20:19:08.143055192Z" level=info msg="Start subscribing containerd event" Jan 23 20:19:08.143411 containerd[1573]: time="2026-01-23T20:19:08.143340043Z" level=info msg="Start recovering state" Jan 23 20:19:08.143691 containerd[1573]: time="2026-01-23T20:19:08.143666466Z" level=info msg="Start event monitor" Jan 23 20:19:08.143803 containerd[1573]: time="2026-01-23T20:19:08.143779453Z" level=info msg="Start cni network conf syncer for default" Jan 23 20:19:08.144596 containerd[1573]: time="2026-01-23T20:19:08.144195275Z" level=info msg="Start streaming server" Jan 23 20:19:08.144596 containerd[1573]: time="2026-01-23T20:19:08.144242765Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 20:19:08.144596 containerd[1573]: time="2026-01-23T20:19:08.144263396Z" level=info msg="runtime interface starting up..." Jan 23 20:19:08.144596 containerd[1573]: time="2026-01-23T20:19:08.144279400Z" level=info msg="starting plugins..." Jan 23 20:19:08.144596 containerd[1573]: time="2026-01-23T20:19:08.144319589Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 20:19:08.147663 containerd[1573]: time="2026-01-23T20:19:08.147540063Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 20:19:08.147793 containerd[1573]: time="2026-01-23T20:19:08.147756124Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 20:19:08.148584 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 23 20:19:08.159601 containerd[1573]: time="2026-01-23T20:19:08.149152427Z" level=info msg="containerd successfully booted in 0.270867s" Jan 23 20:19:08.182535 systemd-hostnamed[1586]: Hostname set to (static) Jan 23 20:19:08.373607 tar[1562]: linux-amd64/README.md Jan 23 20:19:08.412841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:19:08.435651 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 20:19:08.444368 systemd-networkd[1488]: eth0: Gained IPv6LL Jan 23 20:19:08.454556 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 20:19:08.456587 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 20:19:08.462423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:19:08.467612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 20:19:08.521161 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 20:19:08.565842 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 20:19:08.582019 systemd-logind[1552]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 20:19:08.993474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:19:09.727883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 20:19:09.747701 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 20:19:09.947968 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:09.948121 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:09.961426 systemd-networkd[1488]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:27e:24:19ff:fef4:9fa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:27e:24:19ff:fef4:9fa/64 assigned by NDisc. Jan 23 20:19:09.961439 systemd-networkd[1488]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 20:19:10.390235 kubelet[1703]: E0123 20:19:10.390131 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 20:19:10.393433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 20:19:10.393688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 20:19:10.394625 systemd[1]: kubelet.service: Consumed 1.141s CPU time, 265.2M memory peak. Jan 23 20:19:11.557830 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 20:19:11.560686 systemd[1]: Started sshd@0-10.244.9.250:22-68.220.241.50:60050.service - OpenSSH per-connection server daemon (68.220.241.50:60050). 
Jan 23 20:19:11.966144 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:11.969769 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:12.162465 sshd[1713]: Accepted publickey for core from 68.220.241.50 port 60050 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:12.164986 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:12.187040 systemd-logind[1552]: New session 1 of user core. Jan 23 20:19:12.189957 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 20:19:12.192646 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 20:19:12.225637 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 20:19:12.231263 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 20:19:12.328556 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 20:19:12.334284 systemd-logind[1552]: New session c1 of user core. Jan 23 20:19:12.532896 systemd[1720]: Queued start job for default target default.target. Jan 23 20:19:12.541171 systemd[1720]: Created slice app.slice - User Application Slice. Jan 23 20:19:12.541221 systemd[1720]: Reached target paths.target - Paths. Jan 23 20:19:12.541309 systemd[1720]: Reached target timers.target - Timers. Jan 23 20:19:12.543473 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 20:19:12.570286 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 20:19:12.570494 systemd[1720]: Reached target sockets.target - Sockets. Jan 23 20:19:12.570578 systemd[1720]: Reached target basic.target - Basic System. Jan 23 20:19:12.570658 systemd[1720]: Reached target default.target - Main User Target. Jan 23 20:19:12.570723 systemd[1720]: Startup finished in 226ms. 
Jan 23 20:19:12.571009 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 20:19:12.587544 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 20:19:12.904078 login[1636]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 20:19:12.915310 systemd-logind[1552]: New session 2 of user core. Jan 23 20:19:12.921507 login[1635]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 20:19:12.922550 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 20:19:12.936929 systemd-logind[1552]: New session 3 of user core. Jan 23 20:19:12.945449 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 20:19:13.011817 systemd[1]: Started sshd@1-10.244.9.250:22-68.220.241.50:40526.service - OpenSSH per-connection server daemon (68.220.241.50:40526). Jan 23 20:19:13.610412 sshd[1756]: Accepted publickey for core from 68.220.241.50 port 40526 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:13.612869 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:13.626185 systemd-logind[1552]: New session 4 of user core. Jan 23 20:19:13.630377 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 20:19:14.021242 sshd[1759]: Connection closed by 68.220.241.50 port 40526 Jan 23 20:19:14.020238 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:14.027963 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Jan 23 20:19:14.028340 systemd[1]: sshd@1-10.244.9.250:22-68.220.241.50:40526.service: Deactivated successfully. Jan 23 20:19:14.030804 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 20:19:14.033326 systemd-logind[1552]: Removed session 4. Jan 23 20:19:14.119286 systemd[1]: Started sshd@2-10.244.9.250:22-68.220.241.50:40534.service - OpenSSH per-connection server daemon (68.220.241.50:40534). 
Jan 23 20:19:14.703115 sshd[1765]: Accepted publickey for core from 68.220.241.50 port 40534 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:14.706452 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:14.716154 systemd-logind[1552]: New session 5 of user core. Jan 23 20:19:14.723429 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 20:19:15.105153 sshd[1768]: Connection closed by 68.220.241.50 port 40534 Jan 23 20:19:15.106411 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:15.112176 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Jan 23 20:19:15.112783 systemd[1]: sshd@2-10.244.9.250:22-68.220.241.50:40534.service: Deactivated successfully. Jan 23 20:19:15.115388 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 20:19:15.117902 systemd-logind[1552]: Removed session 5. Jan 23 20:19:15.991896 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:15.992328 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:19:16.006761 coreos-metadata[1628]: Jan 23 20:19:16.006 WARN failed to locate config-drive, using the metadata service API instead Jan 23 20:19:16.012111 coreos-metadata[1536]: Jan 23 20:19:16.010 WARN failed to locate config-drive, using the metadata service API instead Jan 23 20:19:16.035994 coreos-metadata[1628]: Jan 23 20:19:16.035 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 20:19:16.036941 coreos-metadata[1536]: Jan 23 20:19:16.036 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 20:19:16.043228 coreos-metadata[1536]: Jan 23 20:19:16.043 INFO Fetch failed with 404: resource not found Jan 23 20:19:16.043417 coreos-metadata[1536]: Jan 23 20:19:16.043 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 20:19:16.043907 
coreos-metadata[1536]: Jan 23 20:19:16.043 INFO Fetch successful Jan 23 20:19:16.044129 coreos-metadata[1536]: Jan 23 20:19:16.043 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 20:19:16.058523 coreos-metadata[1536]: Jan 23 20:19:16.058 INFO Fetch successful Jan 23 20:19:16.058755 coreos-metadata[1536]: Jan 23 20:19:16.058 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 20:19:16.061216 coreos-metadata[1628]: Jan 23 20:19:16.060 INFO Fetch successful Jan 23 20:19:16.061216 coreos-metadata[1628]: Jan 23 20:19:16.061 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 20:19:16.073588 coreos-metadata[1536]: Jan 23 20:19:16.073 INFO Fetch successful Jan 23 20:19:16.073588 coreos-metadata[1536]: Jan 23 20:19:16.073 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 20:19:16.087790 coreos-metadata[1628]: Jan 23 20:19:16.087 INFO Fetch successful Jan 23 20:19:16.088807 coreos-metadata[1536]: Jan 23 20:19:16.088 INFO Fetch successful Jan 23 20:19:16.089044 coreos-metadata[1536]: Jan 23 20:19:16.089 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 20:19:16.102185 unknown[1628]: wrote ssh authorized keys file for user: core Jan 23 20:19:16.108983 coreos-metadata[1536]: Jan 23 20:19:16.108 INFO Fetch successful Jan 23 20:19:16.147189 update-ssh-keys[1777]: Updated "/home/core/.ssh/authorized_keys" Jan 23 20:19:16.148822 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 20:19:16.151400 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 20:19:16.154185 systemd[1]: Finished sshkeys.service. Jan 23 20:19:16.158295 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 23 20:19:16.158709 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 20:19:16.159137 systemd[1]: Startup finished in 3.591s (kernel) + 16.687s (initrd) + 12.871s (userspace) = 33.149s. Jan 23 20:19:20.645029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 20:19:20.648728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:19:20.866186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:19:20.878949 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 20:19:20.944146 kubelet[1794]: E0123 20:19:20.943895 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 20:19:20.947347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 20:19:20.947613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 20:19:20.948337 systemd[1]: kubelet.service: Consumed 248ms CPU time, 110.3M memory peak. Jan 23 20:19:25.232273 systemd[1]: Started sshd@3-10.244.9.250:22-68.220.241.50:34826.service - OpenSSH per-connection server daemon (68.220.241.50:34826). Jan 23 20:19:25.829597 sshd[1802]: Accepted publickey for core from 68.220.241.50 port 34826 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:25.831604 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:25.839636 systemd-logind[1552]: New session 6 of user core. Jan 23 20:19:25.852364 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 20:19:26.230375 sshd[1805]: Connection closed by 68.220.241.50 port 34826 Jan 23 20:19:26.231488 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:26.237413 systemd[1]: sshd@3-10.244.9.250:22-68.220.241.50:34826.service: Deactivated successfully. Jan 23 20:19:26.240315 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 20:19:26.242449 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Jan 23 20:19:26.244338 systemd-logind[1552]: Removed session 6. Jan 23 20:19:26.336078 systemd[1]: Started sshd@4-10.244.9.250:22-68.220.241.50:34840.service - OpenSSH per-connection server daemon (68.220.241.50:34840). Jan 23 20:19:26.938233 sshd[1811]: Accepted publickey for core from 68.220.241.50 port 34840 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:26.940360 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:26.950170 systemd-logind[1552]: New session 7 of user core. Jan 23 20:19:26.958427 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 20:19:27.334771 sshd[1814]: Connection closed by 68.220.241.50 port 34840 Jan 23 20:19:27.335908 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:27.341484 systemd[1]: sshd@4-10.244.9.250:22-68.220.241.50:34840.service: Deactivated successfully. Jan 23 20:19:27.344388 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 20:19:27.346175 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Jan 23 20:19:27.347770 systemd-logind[1552]: Removed session 7. Jan 23 20:19:27.437550 systemd[1]: Started sshd@5-10.244.9.250:22-68.220.241.50:34856.service - OpenSSH per-connection server daemon (68.220.241.50:34856). 
Jan 23 20:19:28.031013 sshd[1820]: Accepted publickey for core from 68.220.241.50 port 34856 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:28.032734 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:28.040152 systemd-logind[1552]: New session 8 of user core. Jan 23 20:19:28.046359 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 20:19:28.435787 sshd[1823]: Connection closed by 68.220.241.50 port 34856 Jan 23 20:19:28.437339 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:28.443616 systemd[1]: sshd@5-10.244.9.250:22-68.220.241.50:34856.service: Deactivated successfully. Jan 23 20:19:28.447058 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 20:19:28.448695 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. Jan 23 20:19:28.451034 systemd-logind[1552]: Removed session 8. Jan 23 20:19:28.539189 systemd[1]: Started sshd@6-10.244.9.250:22-68.220.241.50:34870.service - OpenSSH per-connection server daemon (68.220.241.50:34870). Jan 23 20:19:29.119971 sshd[1829]: Accepted publickey for core from 68.220.241.50 port 34870 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:29.121684 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:29.129498 systemd-logind[1552]: New session 9 of user core. Jan 23 20:19:29.138359 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 20:19:29.496365 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 20:19:29.496809 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 20:19:29.518027 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 23 20:19:29.607415 sshd[1832]: Connection closed by 68.220.241.50 port 34870 Jan 23 20:19:29.608478 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:29.614842 systemd[1]: sshd@6-10.244.9.250:22-68.220.241.50:34870.service: Deactivated successfully. Jan 23 20:19:29.618338 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 20:19:29.619838 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Jan 23 20:19:29.621771 systemd-logind[1552]: Removed session 9. Jan 23 20:19:29.712892 systemd[1]: Started sshd@7-10.244.9.250:22-68.220.241.50:34884.service - OpenSSH per-connection server daemon (68.220.241.50:34884). Jan 23 20:19:30.306157 sshd[1839]: Accepted publickey for core from 68.220.241.50 port 34884 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:30.307915 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:30.315966 systemd-logind[1552]: New session 10 of user core. Jan 23 20:19:30.323448 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 20:19:30.620500 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 20:19:30.620949 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 20:19:30.629015 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 23 20:19:30.637422 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 20:19:30.637870 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 20:19:30.652906 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 20:19:30.706342 augenrules[1866]: No rules Jan 23 20:19:30.707223 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 20:19:30.707708 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 20:19:30.709052 sudo[1843]: pam_unix(sudo:session): session closed for user root Jan 23 20:19:30.800060 sshd[1842]: Connection closed by 68.220.241.50 port 34884 Jan 23 20:19:30.801016 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 23 20:19:30.806502 systemd[1]: sshd@7-10.244.9.250:22-68.220.241.50:34884.service: Deactivated successfully. Jan 23 20:19:30.809564 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 20:19:30.811111 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Jan 23 20:19:30.813621 systemd-logind[1552]: Removed session 10. Jan 23 20:19:30.902898 systemd[1]: Started sshd@8-10.244.9.250:22-68.220.241.50:34886.service - OpenSSH per-connection server daemon (68.220.241.50:34886). Jan 23 20:19:31.198293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 20:19:31.201758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:19:31.405284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 20:19:31.419882 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 20:19:31.488238 sshd[1875]: Accepted publickey for core from 68.220.241.50 port 34886 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:19:31.489921 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:19:31.499165 systemd-logind[1552]: New session 11 of user core. Jan 23 20:19:31.505957 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 20:19:31.511963 kubelet[1885]: E0123 20:19:31.511921 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 20:19:31.515012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 20:19:31.515308 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 20:19:31.515751 systemd[1]: kubelet.service: Consumed 224ms CPU time, 111M memory peak. Jan 23 20:19:31.803785 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 20:19:31.804979 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 20:19:32.342602 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 20:19:32.357817 (dockerd)[1911]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 20:19:32.742431 dockerd[1911]: time="2026-01-23T20:19:32.741834285Z" level=info msg="Starting up" Jan 23 20:19:32.745118 dockerd[1911]: time="2026-01-23T20:19:32.744800056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 20:19:32.761619 dockerd[1911]: time="2026-01-23T20:19:32.761467748Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 20:19:32.785564 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2439487240-merged.mount: Deactivated successfully. Jan 23 20:19:32.820912 dockerd[1911]: time="2026-01-23T20:19:32.820848722Z" level=info msg="Loading containers: start." Jan 23 20:19:32.837119 kernel: Initializing XFRM netlink socket Jan 23 20:19:33.194664 systemd-networkd[1488]: docker0: Link UP Jan 23 20:19:33.199291 dockerd[1911]: time="2026-01-23T20:19:33.199231861Z" level=info msg="Loading containers: done." 
Jan 23 20:19:33.223293 dockerd[1911]: time="2026-01-23T20:19:33.223213902Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 20:19:33.223496 dockerd[1911]: time="2026-01-23T20:19:33.223336420Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 20:19:33.223496 dockerd[1911]: time="2026-01-23T20:19:33.223474708Z" level=info msg="Initializing buildkit" Jan 23 20:19:33.255470 dockerd[1911]: time="2026-01-23T20:19:33.255407080Z" level=info msg="Completed buildkit initialization" Jan 23 20:19:33.262041 dockerd[1911]: time="2026-01-23T20:19:33.261993973Z" level=info msg="Daemon has completed initialization" Jan 23 20:19:33.263073 dockerd[1911]: time="2026-01-23T20:19:33.262072596Z" level=info msg="API listen on /run/docker.sock" Jan 23 20:19:33.262465 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 20:19:33.780304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2432882549-merged.mount: Deactivated successfully. Jan 23 20:19:34.381194 containerd[1573]: time="2026-01-23T20:19:34.381101884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 20:19:35.156667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401679489.mount: Deactivated successfully. 
Jan 23 20:19:36.947892 containerd[1573]: time="2026-01-23T20:19:36.947828459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:36.964843 containerd[1573]: time="2026-01-23T20:19:36.964784674Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 23 20:19:36.966666 containerd[1573]: time="2026-01-23T20:19:36.966559657Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:36.972141 containerd[1573]: time="2026-01-23T20:19:36.971675530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:36.973423 containerd[1573]: time="2026-01-23T20:19:36.973169647Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.591955449s" Jan 23 20:19:36.973423 containerd[1573]: time="2026-01-23T20:19:36.973226402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 20:19:36.975727 containerd[1573]: time="2026-01-23T20:19:36.975682353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 20:19:39.044978 containerd[1573]: time="2026-01-23T20:19:39.044066358Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, 
bytes read=24993362" Jan 23 20:19:39.044978 containerd[1573]: time="2026-01-23T20:19:39.044766050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:39.047568 containerd[1573]: time="2026-01-23T20:19:39.047531005Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:39.048734 containerd[1573]: time="2026-01-23T20:19:39.048696452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:39.050057 containerd[1573]: time="2026-01-23T20:19:39.050017000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.074133303s" Jan 23 20:19:39.050188 containerd[1573]: time="2026-01-23T20:19:39.050060639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 20:19:39.050621 containerd[1573]: time="2026-01-23T20:19:39.050568321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 20:19:39.979063 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 23 20:19:40.864119 containerd[1573]: time="2026-01-23T20:19:40.863592842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:40.869191 containerd[1573]: time="2026-01-23T20:19:40.869133087Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 23 20:19:40.872550 containerd[1573]: time="2026-01-23T20:19:40.872465503Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:40.875989 containerd[1573]: time="2026-01-23T20:19:40.875930093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:40.877620 containerd[1573]: time="2026-01-23T20:19:40.877287923Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.826679222s" Jan 23 20:19:40.877620 containerd[1573]: time="2026-01-23T20:19:40.877358809Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 20:19:40.877911 containerd[1573]: time="2026-01-23T20:19:40.877875211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 20:19:41.697988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 23 20:19:41.701986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:19:41.908671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:19:41.920861 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 20:19:42.031425 kubelet[2203]: E0123 20:19:42.031258 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 20:19:42.036575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 20:19:42.036821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 20:19:42.037713 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.8M memory peak. Jan 23 20:19:42.647555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494235321.mount: Deactivated successfully. 
Jan 23 20:19:43.435080 containerd[1573]: time="2026-01-23T20:19:43.434992657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:43.437325 containerd[1573]: time="2026-01-23T20:19:43.437122994Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 23 20:19:43.441020 containerd[1573]: time="2026-01-23T20:19:43.440867836Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:43.443478 containerd[1573]: time="2026-01-23T20:19:43.443439701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:43.444648 containerd[1573]: time="2026-01-23T20:19:43.444362963Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.566436717s" Jan 23 20:19:43.444648 containerd[1573]: time="2026-01-23T20:19:43.444411754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 20:19:43.445254 containerd[1573]: time="2026-01-23T20:19:43.445224714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 20:19:43.938331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929135679.mount: Deactivated successfully. 
Jan 23 20:19:45.671103 containerd[1573]: time="2026-01-23T20:19:45.671000080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:45.678115 containerd[1573]: time="2026-01-23T20:19:45.677225796Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:45.678115 containerd[1573]: time="2026-01-23T20:19:45.677515926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 23 20:19:45.680840 containerd[1573]: time="2026-01-23T20:19:45.680763757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:45.682148 containerd[1573]: time="2026-01-23T20:19:45.682109394Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.236718677s" Jan 23 20:19:45.682241 containerd[1573]: time="2026-01-23T20:19:45.682152946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 20:19:45.683121 containerd[1573]: time="2026-01-23T20:19:45.683073806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 20:19:46.187209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305660211.mount: Deactivated successfully. 
Jan 23 20:19:46.209366 containerd[1573]: time="2026-01-23T20:19:46.209280718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 20:19:46.210409 containerd[1573]: time="2026-01-23T20:19:46.210371140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 23 20:19:46.213057 containerd[1573]: time="2026-01-23T20:19:46.211238500Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 20:19:46.214191 containerd[1573]: time="2026-01-23T20:19:46.214150080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 20:19:46.216327 containerd[1573]: time="2026-01-23T20:19:46.216240451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.112635ms" Jan 23 20:19:46.216327 containerd[1573]: time="2026-01-23T20:19:46.216284758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 20:19:46.217272 containerd[1573]: time="2026-01-23T20:19:46.217172561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 20:19:46.834342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530901827.mount: 
Deactivated successfully. Jan 23 20:19:50.964979 containerd[1573]: time="2026-01-23T20:19:50.964901816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:50.966733 containerd[1573]: time="2026-01-23T20:19:50.966687042Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 23 20:19:50.968179 containerd[1573]: time="2026-01-23T20:19:50.968129128Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:50.972073 containerd[1573]: time="2026-01-23T20:19:50.972031103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:19:50.973667 containerd[1573]: time="2026-01-23T20:19:50.973618827Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.75624584s" Jan 23 20:19:50.973748 containerd[1573]: time="2026-01-23T20:19:50.973671544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 20:19:52.053993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 20:19:52.061388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:19:52.261923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 20:19:52.272661 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 20:19:52.351900 kubelet[2356]: E0123 20:19:52.351722 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 20:19:52.355412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 20:19:52.355818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 20:19:52.356721 systemd[1]: kubelet.service: Consumed 232ms CPU time, 107.7M memory peak.
Jan 23 20:19:53.023576 update_engine[1554]: I20260123 20:19:53.022292 1554 update_attempter.cc:509] Updating boot flags...
Jan 23 20:19:54.927922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:19:54.928552 systemd[1]: kubelet.service: Consumed 232ms CPU time, 107.7M memory peak.
Jan 23 20:19:54.932201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:19:54.967060 systemd[1]: Reload requested from client PID 2386 ('systemctl') (unit session-11.scope)...
Jan 23 20:19:54.967133 systemd[1]: Reloading...
Jan 23 20:19:55.184170 zram_generator::config[2431]: No configuration found.
Jan 23 20:19:55.480322 systemd[1]: Reloading finished in 512 ms.
Jan 23 20:19:55.556813 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 20:19:55.556975 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 20:19:55.557562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:19:55.557649 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.3M memory peak.
Jan 23 20:19:55.560490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:19:55.743926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:19:55.757938 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 20:19:55.859067 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 20:19:55.859067 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 20:19:55.859067 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 20:19:55.859067 kubelet[2499]: I0123 20:19:55.858055 2499 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 20:19:56.737055 kubelet[2499]: I0123 20:19:56.736987 2499 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 20:19:56.737465 kubelet[2499]: I0123 20:19:56.737445 2499 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 20:19:56.738024 kubelet[2499]: I0123 20:19:56.737999 2499 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 20:19:56.808787 kubelet[2499]: E0123 20:19:56.808207 2499 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.9.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError"
Jan 23 20:19:56.813281 kubelet[2499]: I0123 20:19:56.813237 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 20:19:56.827607 kubelet[2499]: I0123 20:19:56.827478 2499 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 20:19:56.837841 kubelet[2499]: I0123 20:19:56.837520 2499 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 20:19:56.839756 kubelet[2499]: I0123 20:19:56.839689 2499 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 20:19:56.841204 kubelet[2499]: I0123 20:19:56.839738 2499 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-1diuq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 20:19:56.843007 kubelet[2499]: I0123 20:19:56.842924 2499 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 20:19:56.843007 kubelet[2499]: I0123 20:19:56.842958 2499 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 20:19:56.844439 kubelet[2499]: I0123 20:19:56.844394 2499 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 20:19:56.852613 kubelet[2499]: I0123 20:19:56.852585 2499 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 20:19:56.852882 kubelet[2499]: I0123 20:19:56.852767 2499 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 20:19:56.854637 kubelet[2499]: I0123 20:19:56.854190 2499 kubelet.go:352] "Adding apiserver pod source"
Jan 23 20:19:56.854637 kubelet[2499]: I0123 20:19:56.854232 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 20:19:56.860029 kubelet[2499]: W0123 20:19:56.859953 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.9.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1diuq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused
Jan 23 20:19:56.860532 kubelet[2499]: E0123 20:19:56.860041 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.9.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1diuq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError"
Jan 23 20:19:56.861880 kubelet[2499]: I0123 20:19:56.861715 2499 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 20:19:56.865676 kubelet[2499]: I0123 20:19:56.865645 2499 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 20:19:56.866452 kubelet[2499]: W0123 20:19:56.866411 2499 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 20:19:56.868055 kubelet[2499]: I0123 20:19:56.868026 2499 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 20:19:56.868158 kubelet[2499]: I0123 20:19:56.868079 2499 server.go:1287] "Started kubelet"
Jan 23 20:19:56.870304 kubelet[2499]: W0123 20:19:56.870258 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.9.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused
Jan 23 20:19:56.870572 kubelet[2499]: E0123 20:19:56.870533 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.9.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError"
Jan 23 20:19:56.870874 kubelet[2499]: I0123 20:19:56.870798 2499 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 20:19:56.878733 kubelet[2499]: I0123 20:19:56.877998 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 20:19:56.878733 kubelet[2499]: I0123 20:19:56.878608 2499 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 20:19:56.883803 kubelet[2499]: E0123 20:19:56.880115 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.9.250:6443/api/v1/namespaces/default/events\": dial tcp 10.244.9.250:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-1diuq.gb1.brightbox.com.188d75a8e4159bf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-1diuq.gb1.brightbox.com,UID:srv-1diuq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-1diuq.gb1.brightbox.com,},FirstTimestamp:2026-01-23 20:19:56.868049909 +0000 UTC m=+1.105117917,LastTimestamp:2026-01-23 20:19:56.868049909 +0000 UTC m=+1.105117917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-1diuq.gb1.brightbox.com,}"
Jan 23 20:19:56.885254 kubelet[2499]: I0123 20:19:56.885169 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 20:19:56.887189 kubelet[2499]: I0123 20:19:56.886515 2499 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 20:19:56.888858 kubelet[2499]: I0123 20:19:56.888292 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 20:19:56.892761 kubelet[2499]: E0123 20:19:56.891860 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-1diuq.gb1.brightbox.com\" not found"
Jan 23 20:19:56.892761 kubelet[2499]: I0123 20:19:56.891931 2499 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 20:19:56.892761 kubelet[2499]: I0123 20:19:56.892218 2499 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 20:19:56.892761 kubelet[2499]: I0123 20:19:56.892342 2499 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 20:19:56.892998 kubelet[2499]: W0123 20:19:56.892878 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.9.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused
Jan 23 20:19:56.892998 kubelet[2499]: E0123 20:19:56.892931 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.9.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError"
Jan 23 20:19:56.894138 kubelet[2499]: E0123 20:19:56.893266 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.9.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1diuq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.9.250:6443: connect: connection refused" interval="200ms"
Jan 23 20:19:56.895673 kubelet[2499]: I0123 20:19:56.895642 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 20:19:56.899433 kubelet[2499]: E0123 20:19:56.899387 2499 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 20:19:56.902379 kubelet[2499]: I0123 20:19:56.902354 2499 factory.go:221] Registration of the containerd container factory successfully
Jan 23 20:19:56.902510 kubelet[2499]: I0123 20:19:56.902491 2499 factory.go:221] Registration of the systemd container factory successfully
Jan 23 20:19:56.929382 kubelet[2499]: I0123 20:19:56.929187 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 20:19:56.934312 kubelet[2499]: I0123 20:19:56.934282 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 20:19:56.939356 kubelet[2499]: I0123 20:19:56.939329 2499 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 20:19:56.939550 kubelet[2499]: I0123 20:19:56.939392 2499 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 20:19:56.939550 kubelet[2499]: I0123 20:19:56.939405 2499 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 20:19:56.939550 kubelet[2499]: E0123 20:19:56.939505 2499 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 20:19:56.942828 kubelet[2499]: I0123 20:19:56.942638 2499 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 20:19:56.942828 kubelet[2499]: I0123 20:19:56.942777 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 20:19:56.943116 kubelet[2499]: I0123 20:19:56.943058 2499 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 20:19:56.943292 kubelet[2499]: W0123 20:19:56.943159 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.9.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused
Jan 23 20:19:56.943292 kubelet[2499]: E0123 20:19:56.943228 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.9.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError"
Jan 23 20:19:56.948198 kubelet[2499]: I0123 20:19:56.947814 2499 policy_none.go:49] "None policy: Start"
Jan 23 20:19:56.948198 kubelet[2499]: I0123 20:19:56.947864 2499 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 20:19:56.948198 kubelet[2499]: I0123 20:19:56.947896 2499 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 20:19:56.958044 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 20:19:56.973205 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 20:19:56.978881 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 20:19:56.993134 kubelet[2499]: E0123 20:19:56.992760 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-1diuq.gb1.brightbox.com\" not found"
Jan 23 20:19:56.995160 kubelet[2499]: I0123 20:19:56.995131 2499 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 20:19:56.995466 kubelet[2499]: I0123 20:19:56.995431 2499 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 20:19:56.995537 kubelet[2499]: I0123 20:19:56.995463 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 20:19:56.996545 kubelet[2499]: I0123 20:19:56.996379 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 20:19:57.000280 kubelet[2499]: E0123 20:19:57.000248 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 20:19:57.000374 kubelet[2499]: E0123 20:19:57.000350 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-1diuq.gb1.brightbox.com\" not found"
Jan 23 20:19:57.057657 systemd[1]: Created slice kubepods-burstable-podddc6a2a1a11eb4919f99ed6f5d7f705b.slice - libcontainer container kubepods-burstable-podddc6a2a1a11eb4919f99ed6f5d7f705b.slice.
Jan 23 20:19:57.071606 kubelet[2499]: E0123 20:19:57.071532 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.076935 systemd[1]: Created slice kubepods-burstable-pod78bcfbbe3e9549c7a9f29ca2abd2ce30.slice - libcontainer container kubepods-burstable-pod78bcfbbe3e9549c7a9f29ca2abd2ce30.slice.
Jan 23 20:19:57.080848 kubelet[2499]: E0123 20:19:57.080656 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.086116 systemd[1]: Created slice kubepods-burstable-podd14723977819c460e4eb5aa373d05fc6.slice - libcontainer container kubepods-burstable-podd14723977819c460e4eb5aa373d05fc6.slice.
Jan 23 20:19:57.088553 kubelet[2499]: E0123 20:19:57.088506 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093281 kubelet[2499]: I0123 20:19:57.092944 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-kubeconfig\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093281 kubelet[2499]: I0123 20:19:57.092993 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093281 kubelet[2499]: I0123 20:19:57.093030 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddc6a2a1a11eb4919f99ed6f5d7f705b-ca-certs\") pod \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" (UID: \"ddc6a2a1a11eb4919f99ed6f5d7f705b\") " pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093281 kubelet[2499]: I0123 20:19:57.093059 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddc6a2a1a11eb4919f99ed6f5d7f705b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" (UID: \"ddc6a2a1a11eb4919f99ed6f5d7f705b\") " pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093281 kubelet[2499]: I0123 20:19:57.093112 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-k8s-certs\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093549 kubelet[2499]: I0123 20:19:57.093144 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d14723977819c460e4eb5aa373d05fc6-kubeconfig\") pod \"kube-scheduler-srv-1diuq.gb1.brightbox.com\" (UID: \"d14723977819c460e4eb5aa373d05fc6\") " pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093549 kubelet[2499]: I0123 20:19:57.093168 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddc6a2a1a11eb4919f99ed6f5d7f705b-k8s-certs\") pod \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" (UID: \"ddc6a2a1a11eb4919f99ed6f5d7f705b\") " pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093549 kubelet[2499]: I0123 20:19:57.093194 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-ca-certs\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.093549 kubelet[2499]: I0123 20:19:57.093223 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-flexvolume-dir\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.094033 kubelet[2499]: E0123 20:19:57.093974 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.9.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1diuq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.9.250:6443: connect: connection refused" interval="400ms"
Jan 23 20:19:57.099027 kubelet[2499]: I0123 20:19:57.098995 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.099501 kubelet[2499]: E0123 20:19:57.099465 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.9.250:6443/api/v1/nodes\": dial tcp 10.244.9.250:6443: connect: connection refused" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.303548 kubelet[2499]: I0123 20:19:57.302873 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.303548 kubelet[2499]: E0123 20:19:57.303379 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.9.250:6443/api/v1/nodes\": dial tcp 10.244.9.250:6443: connect: connection refused" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.376483 containerd[1573]: time="2026-01-23T20:19:57.376404488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-1diuq.gb1.brightbox.com,Uid:ddc6a2a1a11eb4919f99ed6f5d7f705b,Namespace:kube-system,Attempt:0,}"
Jan 23 20:19:57.393191 containerd[1573]: time="2026-01-23T20:19:57.393075965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-1diuq.gb1.brightbox.com,Uid:78bcfbbe3e9549c7a9f29ca2abd2ce30,Namespace:kube-system,Attempt:0,}"
Jan 23 20:19:57.393711 containerd[1573]: time="2026-01-23T20:19:57.393675894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-1diuq.gb1.brightbox.com,Uid:d14723977819c460e4eb5aa373d05fc6,Namespace:kube-system,Attempt:0,}"
Jan 23 20:19:57.495877 kubelet[2499]: E0123 20:19:57.495617 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.9.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1diuq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.9.250:6443: connect: connection refused" interval="800ms"
Jan 23 20:19:57.526583 containerd[1573]: time="2026-01-23T20:19:57.526428706Z" level=info msg="connecting to shim bee76c1145a81bc64a0fad2589623721bf2b967897c1771037d91c1250cdb935" address="unix:///run/containerd/s/26d5ca541e8fa0bdf50c8c7ed607d991140c449df065d326eadc45bf6a6858d3" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:19:57.537111 containerd[1573]: time="2026-01-23T20:19:57.537031312Z" level=info msg="connecting to shim be776e43fcf4501b8ac3ecf70e37235aa9027aa73a1a10c72926bd0df1ea2451" address="unix:///run/containerd/s/4ce2ed962bd799a56c0e8f3214735972f7e40860463ec7e4ba763c9bc7cac123" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:19:57.538070 containerd[1573]: time="2026-01-23T20:19:57.537998693Z" level=info msg="connecting to shim e386413670e1673e94bbdc470ea3590a9378834ccc10d05012f552a6bda71828" address="unix:///run/containerd/s/78c5c303b3b7e7c112a31b5d7739e076fa485bf1e986acb648ce4e596bf7c021" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:19:57.685430 systemd[1]: Started cri-containerd-be776e43fcf4501b8ac3ecf70e37235aa9027aa73a1a10c72926bd0df1ea2451.scope - libcontainer container be776e43fcf4501b8ac3ecf70e37235aa9027aa73a1a10c72926bd0df1ea2451.
Jan 23 20:19:57.688331 systemd[1]: Started cri-containerd-bee76c1145a81bc64a0fad2589623721bf2b967897c1771037d91c1250cdb935.scope - libcontainer container bee76c1145a81bc64a0fad2589623721bf2b967897c1771037d91c1250cdb935.
Jan 23 20:19:57.691568 systemd[1]: Started cri-containerd-e386413670e1673e94bbdc470ea3590a9378834ccc10d05012f552a6bda71828.scope - libcontainer container e386413670e1673e94bbdc470ea3590a9378834ccc10d05012f552a6bda71828.
Jan 23 20:19:57.712812 kubelet[2499]: I0123 20:19:57.712580 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.713410 kubelet[2499]: E0123 20:19:57.713328 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.9.250:6443/api/v1/nodes\": dial tcp 10.244.9.250:6443: connect: connection refused" node="srv-1diuq.gb1.brightbox.com"
Jan 23 20:19:57.734791 kubelet[2499]: W0123 20:19:57.734578 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.9.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1diuq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused
Jan 23 20:19:57.734791 kubelet[2499]: E0123 20:19:57.734690 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.9.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1diuq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError"
Jan 23 20:19:57.832169 containerd[1573]: time="2026-01-23T20:19:57.832062272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-1diuq.gb1.brightbox.com,Uid:ddc6a2a1a11eb4919f99ed6f5d7f705b,Namespace:kube-system,Attempt:0,} returns sandbox id \"be776e43fcf4501b8ac3ecf70e37235aa9027aa73a1a10c72926bd0df1ea2451\""
Jan 23 20:19:57.838933 containerd[1573]: time="2026-01-23T20:19:57.838807735Z" level=info msg="CreateContainer within sandbox \"be776e43fcf4501b8ac3ecf70e37235aa9027aa73a1a10c72926bd0df1ea2451\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 20:19:57.860134 containerd[1573]: time="2026-01-23T20:19:57.859654017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-1diuq.gb1.brightbox.com,Uid:78bcfbbe3e9549c7a9f29ca2abd2ce30,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee76c1145a81bc64a0fad2589623721bf2b967897c1771037d91c1250cdb935\""
Jan 23 20:19:57.863583 containerd[1573]: time="2026-01-23T20:19:57.863549236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-1diuq.gb1.brightbox.com,Uid:d14723977819c460e4eb5aa373d05fc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e386413670e1673e94bbdc470ea3590a9378834ccc10d05012f552a6bda71828\""
Jan 23 20:19:57.865939 containerd[1573]: time="2026-01-23T20:19:57.865907845Z" level=info msg="Container 193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:19:57.870933 containerd[1573]: time="2026-01-23T20:19:57.870897512Z" level=info msg="CreateContainer within sandbox \"bee76c1145a81bc64a0fad2589623721bf2b967897c1771037d91c1250cdb935\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 20:19:57.873722 containerd[1573]: time="2026-01-23T20:19:57.873079565Z" level=info msg="CreateContainer within sandbox \"e386413670e1673e94bbdc470ea3590a9378834ccc10d05012f552a6bda71828\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 20:19:57.881685 containerd[1573]: time="2026-01-23T20:19:57.881637957Z" level=info msg="CreateContainer within sandbox \"be776e43fcf4501b8ac3ecf70e37235aa9027aa73a1a10c72926bd0df1ea2451\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40\""
Jan 23 20:19:57.882920 containerd[1573]: time="2026-01-23T20:19:57.882886195Z" level=info msg="StartContainer for \"193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40\""
Jan 23 20:19:57.884690 containerd[1573]: time="2026-01-23T20:19:57.884654997Z" level=info msg="connecting to shim 193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40" address="unix:///run/containerd/s/4ce2ed962bd799a56c0e8f3214735972f7e40860463ec7e4ba763c9bc7cac123" protocol=ttrpc version=3
Jan 23 20:19:57.893591 containerd[1573]: time="2026-01-23T20:19:57.893517844Z" level=info msg="Container 4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:19:57.910058 containerd[1573]: time="2026-01-23T20:19:57.909957724Z" level=info msg="Container 2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:19:57.916982 containerd[1573]: time="2026-01-23T20:19:57.916931360Z" level=info msg="CreateContainer within sandbox \"bee76c1145a81bc64a0fad2589623721bf2b967897c1771037d91c1250cdb935\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946\""
Jan 23 20:19:57.919021 containerd[1573]: time="2026-01-23T20:19:57.918984692Z" level=info msg="StartContainer for \"4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946\""
Jan 23 20:19:57.920441 systemd[1]: Started cri-containerd-193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40.scope - libcontainer container 193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40.
Jan 23 20:19:57.924220 containerd[1573]: time="2026-01-23T20:19:57.924184892Z" level=info msg="connecting to shim 4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946" address="unix:///run/containerd/s/26d5ca541e8fa0bdf50c8c7ed607d991140c449df065d326eadc45bf6a6858d3" protocol=ttrpc version=3 Jan 23 20:19:57.924503 containerd[1573]: time="2026-01-23T20:19:57.924470870Z" level=info msg="CreateContainer within sandbox \"e386413670e1673e94bbdc470ea3590a9378834ccc10d05012f552a6bda71828\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d\"" Jan 23 20:19:57.927867 containerd[1573]: time="2026-01-23T20:19:57.927801527Z" level=info msg="StartContainer for \"2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d\"" Jan 23 20:19:57.934258 containerd[1573]: time="2026-01-23T20:19:57.934202719Z" level=info msg="connecting to shim 2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d" address="unix:///run/containerd/s/78c5c303b3b7e7c112a31b5d7739e076fa485bf1e986acb648ce4e596bf7c021" protocol=ttrpc version=3 Jan 23 20:19:57.995331 systemd[1]: Started cri-containerd-2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d.scope - libcontainer container 2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d. Jan 23 20:19:57.997415 systemd[1]: Started cri-containerd-4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946.scope - libcontainer container 4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946. 
Jan 23 20:19:58.064370 containerd[1573]: time="2026-01-23T20:19:58.064304870Z" level=info msg="StartContainer for \"193b22798793675f0f0547e0e1fb411c94ecddc01a83febd43155c878079db40\" returns successfully" Jan 23 20:19:58.123793 containerd[1573]: time="2026-01-23T20:19:58.123646072Z" level=info msg="StartContainer for \"4491322e4e149658704e3de86575b586098260d8844726691f712419bd316946\" returns successfully" Jan 23 20:19:58.124893 kubelet[2499]: W0123 20:19:58.124850 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.9.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused Jan 23 20:19:58.130595 kubelet[2499]: E0123 20:19:58.128280 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.9.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError" Jan 23 20:19:58.157971 containerd[1573]: time="2026-01-23T20:19:58.157904632Z" level=info msg="StartContainer for \"2cceedb1dc154b0b3c0df2e5ec370b9b299c566a34dc335432da5307fc5cd09d\" returns successfully" Jan 23 20:19:58.160000 kubelet[2499]: W0123 20:19:58.159928 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.9.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused Jan 23 20:19:58.160208 kubelet[2499]: E0123 20:19:58.160171 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.9.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: 
connect: connection refused" logger="UnhandledError" Jan 23 20:19:58.297375 kubelet[2499]: E0123 20:19:58.297149 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.9.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1diuq.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.9.250:6443: connect: connection refused" interval="1.6s" Jan 23 20:19:58.343130 kubelet[2499]: W0123 20:19:58.342966 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.9.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.9.250:6443: connect: connection refused Jan 23 20:19:58.343482 kubelet[2499]: E0123 20:19:58.343408 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.9.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.9.250:6443: connect: connection refused" logger="UnhandledError" Jan 23 20:19:58.518814 kubelet[2499]: I0123 20:19:58.518378 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:19:58.519609 kubelet[2499]: E0123 20:19:58.519151 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.9.250:6443/api/v1/nodes\": dial tcp 10.244.9.250:6443: connect: connection refused" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:19:58.995846 kubelet[2499]: E0123 20:19:58.995795 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:19:59.004873 kubelet[2499]: E0123 20:19:59.004822 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" 
node="srv-1diuq.gb1.brightbox.com" Jan 23 20:19:59.007962 kubelet[2499]: E0123 20:19:59.007917 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:00.014220 kubelet[2499]: E0123 20:20:00.014171 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:00.014892 kubelet[2499]: E0123 20:20:00.014671 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:00.016334 kubelet[2499]: E0123 20:20:00.016309 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:00.123832 kubelet[2499]: I0123 20:20:00.123781 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.016999 kubelet[2499]: E0123 20:20:01.016944 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.019671 kubelet[2499]: E0123 20:20:01.019640 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1diuq.gb1.brightbox.com\" not found" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.572149 kubelet[2499]: I0123 20:20:01.572030 2499 kubelet_node_status.go:78] "Successfully registered node" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.572149 kubelet[2499]: E0123 20:20:01.572132 2499 kubelet_node_status.go:548] "Error updating node status, will retry" 
err="error getting node \"srv-1diuq.gb1.brightbox.com\": node \"srv-1diuq.gb1.brightbox.com\" not found" Jan 23 20:20:01.593109 kubelet[2499]: I0123 20:20:01.593053 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.604438 kubelet[2499]: E0123 20:20:01.604284 2499 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-1diuq.gb1.brightbox.com.188d75a8e4159bf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-1diuq.gb1.brightbox.com,UID:srv-1diuq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-1diuq.gb1.brightbox.com,},FirstTimestamp:2026-01-23 20:19:56.868049909 +0000 UTC m=+1.105117917,LastTimestamp:2026-01-23 20:19:56.868049909 +0000 UTC m=+1.105117917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-1diuq.gb1.brightbox.com,}" Jan 23 20:20:01.622491 kubelet[2499]: E0123 20:20:01.622394 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1diuq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.622820 kubelet[2499]: I0123 20:20:01.622566 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.627833 kubelet[2499]: E0123 20:20:01.627782 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.627833 kubelet[2499]: I0123 
20:20:01.627827 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.631823 kubelet[2499]: E0123 20:20:01.631784 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:01.870257 kubelet[2499]: I0123 20:20:01.869449 2499 apiserver.go:52] "Watching apiserver" Jan 23 20:20:01.893113 kubelet[2499]: I0123 20:20:01.893013 2499 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 20:20:02.015256 kubelet[2499]: I0123 20:20:02.015171 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:02.018839 kubelet[2499]: E0123 20:20:02.018543 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1diuq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:03.317179 kubelet[2499]: I0123 20:20:03.316793 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:03.332017 kubelet[2499]: W0123 20:20:03.331543 2499 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 20:20:03.802937 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-11.scope)... Jan 23 20:20:03.802972 systemd[1]: Reloading... Jan 23 20:20:03.931133 zram_generator::config[2815]: No configuration found. Jan 23 20:20:04.323831 systemd[1]: Reloading finished in 520 ms. 
Jan 23 20:20:04.359755 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:20:04.381057 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 20:20:04.382140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:20:04.382257 systemd[1]: kubelet.service: Consumed 1.706s CPU time, 129.1M memory peak. Jan 23 20:20:04.388432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:20:04.722608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:20:04.734944 (kubelet)[2879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 20:20:04.847483 kubelet[2879]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 20:20:04.847483 kubelet[2879]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 20:20:04.847483 kubelet[2879]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 20:20:04.848580 kubelet[2879]: I0123 20:20:04.848530 2879 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 20:20:04.866492 kubelet[2879]: I0123 20:20:04.866416 2879 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 20:20:04.868148 kubelet[2879]: I0123 20:20:04.867138 2879 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 20:20:04.868148 kubelet[2879]: I0123 20:20:04.867708 2879 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 20:20:04.872048 kubelet[2879]: I0123 20:20:04.871208 2879 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 20:20:04.881898 kubelet[2879]: I0123 20:20:04.881795 2879 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 20:20:04.883267 sudo[2892]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 20:20:04.884941 sudo[2892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 20:20:04.899378 kubelet[2879]: I0123 20:20:04.898256 2879 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 20:20:04.911374 kubelet[2879]: I0123 20:20:04.911315 2879 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 20:20:04.912748 kubelet[2879]: I0123 20:20:04.912672 2879 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 20:20:04.913297 kubelet[2879]: I0123 20:20:04.912747 2879 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-1diuq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 20:20:04.914754 kubelet[2879]: I0123 20:20:04.914000 2879 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 23 20:20:04.914754 kubelet[2879]: I0123 20:20:04.914033 2879 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 20:20:04.916696 kubelet[2879]: I0123 20:20:04.916665 2879 state_mem.go:36] "Initialized new in-memory state store" Jan 23 20:20:04.917004 kubelet[2879]: I0123 20:20:04.916979 2879 kubelet.go:446] "Attempting to sync node with API server" Jan 23 20:20:04.917097 kubelet[2879]: I0123 20:20:04.917020 2879 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 20:20:04.917097 kubelet[2879]: I0123 20:20:04.917065 2879 kubelet.go:352] "Adding apiserver pod source" Jan 23 20:20:04.917225 kubelet[2879]: I0123 20:20:04.917108 2879 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 20:20:04.926412 kubelet[2879]: I0123 20:20:04.926307 2879 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 20:20:04.932038 kubelet[2879]: I0123 20:20:04.930789 2879 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 20:20:04.932687 kubelet[2879]: I0123 20:20:04.932660 2879 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 20:20:04.932748 kubelet[2879]: I0123 20:20:04.932710 2879 server.go:1287] "Started kubelet" Jan 23 20:20:04.946716 kubelet[2879]: I0123 20:20:04.946673 2879 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 20:20:04.955235 kubelet[2879]: I0123 20:20:04.955169 2879 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 20:20:04.962972 kubelet[2879]: I0123 20:20:04.960234 2879 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 20:20:04.962972 kubelet[2879]: I0123 20:20:04.960854 2879 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 20:20:04.965115 kubelet[2879]: I0123 
20:20:04.965043 2879 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 20:20:04.967950 kubelet[2879]: I0123 20:20:04.966901 2879 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 20:20:04.985195 kubelet[2879]: I0123 20:20:04.967236 2879 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 20:20:04.985195 kubelet[2879]: E0123 20:20:04.967510 2879 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-1diuq.gb1.brightbox.com\" not found" Jan 23 20:20:04.985195 kubelet[2879]: I0123 20:20:04.976004 2879 server.go:479] "Adding debug handlers to kubelet server" Jan 23 20:20:04.987010 kubelet[2879]: I0123 20:20:04.986508 2879 reconciler.go:26] "Reconciler: start to sync state" Jan 23 20:20:05.000586 kubelet[2879]: I0123 20:20:05.000503 2879 factory.go:221] Registration of the systemd container factory successfully Jan 23 20:20:05.000813 kubelet[2879]: I0123 20:20:05.000739 2879 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 20:20:05.008677 kubelet[2879]: E0123 20:20:05.008612 2879 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 20:20:05.013073 kubelet[2879]: I0123 20:20:05.012488 2879 factory.go:221] Registration of the containerd container factory successfully Jan 23 20:20:05.056549 kubelet[2879]: I0123 20:20:05.056383 2879 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 20:20:05.070065 kubelet[2879]: I0123 20:20:05.070004 2879 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 20:20:05.070065 kubelet[2879]: I0123 20:20:05.070069 2879 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 20:20:05.070065 kubelet[2879]: I0123 20:20:05.070117 2879 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 20:20:05.070065 kubelet[2879]: I0123 20:20:05.070131 2879 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 20:20:05.070065 kubelet[2879]: E0123 20:20:05.070228 2879 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 20:20:05.158864 kubelet[2879]: I0123 20:20:05.158809 2879 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 20:20:05.158864 kubelet[2879]: I0123 20:20:05.158843 2879 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 20:20:05.158864 kubelet[2879]: I0123 20:20:05.158873 2879 state_mem.go:36] "Initialized new in-memory state store" Jan 23 20:20:05.159256 kubelet[2879]: I0123 20:20:05.159157 2879 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 20:20:05.159256 kubelet[2879]: I0123 20:20:05.159178 2879 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 20:20:05.159256 kubelet[2879]: I0123 20:20:05.159210 2879 policy_none.go:49] "None policy: Start" Jan 23 20:20:05.159256 kubelet[2879]: I0123 20:20:05.159225 2879 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 20:20:05.159256 kubelet[2879]: I0123 20:20:05.159243 2879 state_mem.go:35] "Initializing new in-memory state store" Jan 23 20:20:05.159506 kubelet[2879]: I0123 20:20:05.159414 2879 state_mem.go:75] "Updated machine memory state" Jan 23 20:20:05.169434 kubelet[2879]: I0123 20:20:05.169195 2879 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 20:20:05.170725 kubelet[2879]: E0123 
20:20:05.170388 2879 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 20:20:05.171690 kubelet[2879]: I0123 20:20:05.171650 2879 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 20:20:05.176710 kubelet[2879]: I0123 20:20:05.171708 2879 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 20:20:05.178631 kubelet[2879]: I0123 20:20:05.178374 2879 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 20:20:05.178724 kubelet[2879]: E0123 20:20:05.178673 2879 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 20:20:05.313428 kubelet[2879]: I0123 20:20:05.313250 2879 kubelet_node_status.go:75] "Attempting to register node" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.344624 kubelet[2879]: I0123 20:20:05.344565 2879 kubelet_node_status.go:124] "Node was previously registered" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.344885 kubelet[2879]: I0123 20:20:05.344694 2879 kubelet_node_status.go:78] "Successfully registered node" node="srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.372674 kubelet[2879]: I0123 20:20:05.372613 2879 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.389230 kubelet[2879]: W0123 20:20:05.389183 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 20:20:05.389441 kubelet[2879]: E0123 20:20:05.389344 2879 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.470968 kubelet[2879]: I0123 20:20:05.469833 2879 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-kubeconfig\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.470968 kubelet[2879]: I0123 20:20:05.469902 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.470968 kubelet[2879]: I0123 20:20:05.469945 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddc6a2a1a11eb4919f99ed6f5d7f705b-ca-certs\") pod \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" (UID: \"ddc6a2a1a11eb4919f99ed6f5d7f705b\") " pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.470968 kubelet[2879]: I0123 20:20:05.469975 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddc6a2a1a11eb4919f99ed6f5d7f705b-k8s-certs\") pod \"kube-apiserver-srv-1diuq.gb1.brightbox.com\" (UID: \"ddc6a2a1a11eb4919f99ed6f5d7f705b\") " pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.470968 kubelet[2879]: I0123 20:20:05.470007 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddc6a2a1a11eb4919f99ed6f5d7f705b-usr-share-ca-certificates\") pod 
\"kube-apiserver-srv-1diuq.gb1.brightbox.com\" (UID: \"ddc6a2a1a11eb4919f99ed6f5d7f705b\") " pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.471350 kubelet[2879]: I0123 20:20:05.470033 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-ca-certs\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.471350 kubelet[2879]: I0123 20:20:05.470060 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-flexvolume-dir\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.471350 kubelet[2879]: I0123 20:20:05.470252 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78bcfbbe3e9549c7a9f29ca2abd2ce30-k8s-certs\") pod \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" (UID: \"78bcfbbe3e9549c7a9f29ca2abd2ce30\") " pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.472640 kubelet[2879]: I0123 20:20:05.472507 2879 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.473793 kubelet[2879]: I0123 20:20:05.473062 2879 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com" Jan 23 20:20:05.490301 kubelet[2879]: W0123 20:20:05.488951 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 20:20:05.492846 kubelet[2879]: W0123 20:20:05.491893 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 20:20:05.570988 kubelet[2879]: I0123 20:20:05.570566 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d14723977819c460e4eb5aa373d05fc6-kubeconfig\") pod \"kube-scheduler-srv-1diuq.gb1.brightbox.com\" (UID: \"d14723977819c460e4eb5aa373d05fc6\") " pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com"
Jan 23 20:20:05.622774 sudo[2892]: pam_unix(sudo:session): session closed for user root
Jan 23 20:20:05.921234 kubelet[2879]: I0123 20:20:05.920473 2879 apiserver.go:52] "Watching apiserver"
Jan 23 20:20:05.985958 kubelet[2879]: I0123 20:20:05.985911 2879 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 20:20:06.124574 kubelet[2879]: I0123 20:20:06.124497 2879 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:20:06.126471 kubelet[2879]: I0123 20:20:06.126239 2879 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com"
Jan 23 20:20:06.141951 kubelet[2879]: W0123 20:20:06.141576 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 20:20:06.141951 kubelet[2879]: E0123 20:20:06.141654 2879 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1diuq.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com"
Jan 23 20:20:06.141951 kubelet[2879]: W0123 20:20:06.141654 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 20:20:06.141951 kubelet[2879]: E0123 20:20:06.141735 2879 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-1diuq.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com"
Jan 23 20:20:06.183493 kubelet[2879]: I0123 20:20:06.183316 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-1diuq.gb1.brightbox.com" podStartSLOduration=3.183282118 podStartE2EDuration="3.183282118s" podCreationTimestamp="2026-01-23 20:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:20:06.169276246 +0000 UTC m=+1.421840920" watchObservedRunningTime="2026-01-23 20:20:06.183282118 +0000 UTC m=+1.435846755"
Jan 23 20:20:06.201686 kubelet[2879]: I0123 20:20:06.201601 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-1diuq.gb1.brightbox.com" podStartSLOduration=1.201560521 podStartE2EDuration="1.201560521s" podCreationTimestamp="2026-01-23 20:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:20:06.184992837 +0000 UTC m=+1.437557541" watchObservedRunningTime="2026-01-23 20:20:06.201560521 +0000 UTC m=+1.454125164"
Jan 23 20:20:06.217288 kubelet[2879]: I0123 20:20:06.215926 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-1diuq.gb1.brightbox.com" podStartSLOduration=1.215904443 podStartE2EDuration="1.215904443s" podCreationTimestamp="2026-01-23 20:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:20:06.202374505 +0000 UTC m=+1.454939201" watchObservedRunningTime="2026-01-23 20:20:06.215904443 +0000 UTC m=+1.468469105"
Jan 23 20:20:07.547669 sudo[1893]: pam_unix(sudo:session): session closed for user root
Jan 23 20:20:07.637247 sshd[1891]: Connection closed by 68.220.241.50 port 34886
Jan 23 20:20:07.638922 sshd-session[1875]: pam_unix(sshd:session): session closed for user core
Jan 23 20:20:07.647923 systemd[1]: sshd@8-10.244.9.250:22-68.220.241.50:34886.service: Deactivated successfully.
Jan 23 20:20:07.651037 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 20:20:07.651661 systemd[1]: session-11.scope: Consumed 6.079s CPU time, 210.1M memory peak.
Jan 23 20:20:07.655316 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit.
Jan 23 20:20:07.658605 systemd-logind[1552]: Removed session 11.
Jan 23 20:20:09.064958 kubelet[2879]: I0123 20:20:09.064905 2879 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 20:20:09.066251 containerd[1573]: time="2026-01-23T20:20:09.066141866Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 20:20:09.067022 kubelet[2879]: I0123 20:20:09.066884 2879 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 20:20:09.992724 systemd[1]: Created slice kubepods-besteffort-pod2a05d2d1_7731_4614_9e2d_3bba4f969824.slice - libcontainer container kubepods-besteffort-pod2a05d2d1_7731_4614_9e2d_3bba4f969824.slice.
Jan 23 20:20:10.026043 systemd[1]: Created slice kubepods-burstable-podc933f622_cb47_4823_a800_3acb8b64ac71.slice - libcontainer container kubepods-burstable-podc933f622_cb47_4823_a800_3acb8b64ac71.slice.
Jan 23 20:20:10.101973 kubelet[2879]: I0123 20:20:10.101903 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-kernel\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.103936 kubelet[2879]: I0123 20:20:10.103065 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-cgroup\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.103936 kubelet[2879]: I0123 20:20:10.103153 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbsg\" (UniqueName: \"kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-kube-api-access-zcbsg\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.103936 kubelet[2879]: I0123 20:20:10.103192 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a05d2d1-7731-4614-9e2d-3bba4f969824-xtables-lock\") pod \"kube-proxy-n6g2x\" (UID: \"2a05d2d1-7731-4614-9e2d-3bba4f969824\") " pod="kube-system/kube-proxy-n6g2x"
Jan 23 20:20:10.103936 kubelet[2879]: I0123 20:20:10.103221 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-xtables-lock\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.103936 kubelet[2879]: I0123 20:20:10.103246 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-hubble-tls\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.103936 kubelet[2879]: I0123 20:20:10.103276 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-config-path\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104475 kubelet[2879]: I0123 20:20:10.103308 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-etc-cni-netd\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104475 kubelet[2879]: I0123 20:20:10.103339 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-lib-modules\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104475 kubelet[2879]: I0123 20:20:10.103386 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-hostproc\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104475 kubelet[2879]: I0123 20:20:10.103437 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a05d2d1-7731-4614-9e2d-3bba4f969824-lib-modules\") pod \"kube-proxy-n6g2x\" (UID: \"2a05d2d1-7731-4614-9e2d-3bba4f969824\") " pod="kube-system/kube-proxy-n6g2x"
Jan 23 20:20:10.104475 kubelet[2879]: I0123 20:20:10.103484 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-run\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104475 kubelet[2879]: I0123 20:20:10.103514 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-net\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104758 kubelet[2879]: I0123 20:20:10.103560 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7sgd\" (UniqueName: \"kubernetes.io/projected/2a05d2d1-7731-4614-9e2d-3bba4f969824-kube-api-access-k7sgd\") pod \"kube-proxy-n6g2x\" (UID: \"2a05d2d1-7731-4614-9e2d-3bba4f969824\") " pod="kube-system/kube-proxy-n6g2x"
Jan 23 20:20:10.104758 kubelet[2879]: I0123 20:20:10.103592 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-bpf-maps\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104758 kubelet[2879]: I0123 20:20:10.103622 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c933f622-cb47-4823-a800-3acb8b64ac71-clustermesh-secrets\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.104758 kubelet[2879]: I0123 20:20:10.103650 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a05d2d1-7731-4614-9e2d-3bba4f969824-kube-proxy\") pod \"kube-proxy-n6g2x\" (UID: \"2a05d2d1-7731-4614-9e2d-3bba4f969824\") " pod="kube-system/kube-proxy-n6g2x"
Jan 23 20:20:10.104758 kubelet[2879]: I0123 20:20:10.103676 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cni-path\") pod \"cilium-4fwrt\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") " pod="kube-system/cilium-4fwrt"
Jan 23 20:20:10.177865 systemd[1]: Created slice kubepods-besteffort-pod3764a13a_246f_4d12_bf52_63a9b10730a2.slice - libcontainer container kubepods-besteffort-pod3764a13a_246f_4d12_bf52_63a9b10730a2.slice.
Jan 23 20:20:10.187469 kubelet[2879]: I0123 20:20:10.187367 2879 status_manager.go:890] "Failed to get status for pod" podUID="3764a13a-246f-4d12-bf52-63a9b10730a2" pod="kube-system/cilium-operator-6c4d7847fc-h2cbd" err="pods \"cilium-operator-6c4d7847fc-h2cbd\" is forbidden: User \"system:node:srv-1diuq.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-1diuq.gb1.brightbox.com' and this object"
Jan 23 20:20:10.306338 kubelet[2879]: I0123 20:20:10.306061 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3764a13a-246f-4d12-bf52-63a9b10730a2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h2cbd\" (UID: \"3764a13a-246f-4d12-bf52-63a9b10730a2\") " pod="kube-system/cilium-operator-6c4d7847fc-h2cbd"
Jan 23 20:20:10.307429 kubelet[2879]: I0123 20:20:10.307158 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgbjx\" (UniqueName: \"kubernetes.io/projected/3764a13a-246f-4d12-bf52-63a9b10730a2-kube-api-access-hgbjx\") pod \"cilium-operator-6c4d7847fc-h2cbd\" (UID: \"3764a13a-246f-4d12-bf52-63a9b10730a2\") " pod="kube-system/cilium-operator-6c4d7847fc-h2cbd"
Jan 23 20:20:10.312046 containerd[1573]: time="2026-01-23T20:20:10.311337093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6g2x,Uid:2a05d2d1-7731-4614-9e2d-3bba4f969824,Namespace:kube-system,Attempt:0,}"
Jan 23 20:20:10.337680 containerd[1573]: time="2026-01-23T20:20:10.337250846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4fwrt,Uid:c933f622-cb47-4823-a800-3acb8b64ac71,Namespace:kube-system,Attempt:0,}"
Jan 23 20:20:10.337680 containerd[1573]: time="2026-01-23T20:20:10.337387894Z" level=info msg="connecting to shim a76a19c35f4d77a17d1f2794f8fbdb5ade5fe90dc7ae14c3d3619424f90717b6" address="unix:///run/containerd/s/173b5240049eaa03968cb01539e4719f63162601e35ce2b1614c149d756f2e73" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:20:10.368045 containerd[1573]: time="2026-01-23T20:20:10.367987428Z" level=info msg="connecting to shim 273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b" address="unix:///run/containerd/s/d93fc139083fd2dbe7900ace7ff1297db990575870f88039095466d63775e09b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:20:10.382386 systemd[1]: Started cri-containerd-a76a19c35f4d77a17d1f2794f8fbdb5ade5fe90dc7ae14c3d3619424f90717b6.scope - libcontainer container a76a19c35f4d77a17d1f2794f8fbdb5ade5fe90dc7ae14c3d3619424f90717b6.
Jan 23 20:20:10.431532 systemd[1]: Started cri-containerd-273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b.scope - libcontainer container 273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b.
Jan 23 20:20:10.484837 containerd[1573]: time="2026-01-23T20:20:10.484782069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6g2x,Uid:2a05d2d1-7731-4614-9e2d-3bba4f969824,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76a19c35f4d77a17d1f2794f8fbdb5ade5fe90dc7ae14c3d3619424f90717b6\""
Jan 23 20:20:10.487825 containerd[1573]: time="2026-01-23T20:20:10.487790867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h2cbd,Uid:3764a13a-246f-4d12-bf52-63a9b10730a2,Namespace:kube-system,Attempt:0,}"
Jan 23 20:20:10.491848 containerd[1573]: time="2026-01-23T20:20:10.491810637Z" level=info msg="CreateContainer within sandbox \"a76a19c35f4d77a17d1f2794f8fbdb5ade5fe90dc7ae14c3d3619424f90717b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 20:20:10.504534 containerd[1573]: time="2026-01-23T20:20:10.504483816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4fwrt,Uid:c933f622-cb47-4823-a800-3acb8b64ac71,Namespace:kube-system,Attempt:0,} returns sandbox id \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\""
Jan 23 20:20:10.507851 containerd[1573]: time="2026-01-23T20:20:10.507805503Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 20:20:10.550147 containerd[1573]: time="2026-01-23T20:20:10.549862241Z" level=info msg="Container 349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:10.553116 containerd[1573]: time="2026-01-23T20:20:10.550410268Z" level=info msg="connecting to shim 80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9" address="unix:///run/containerd/s/b49eb1b09a57a8eeb24cf156f51d2b7c5a6f611624674ebc185854550a574fe1" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:20:10.582431 containerd[1573]: time="2026-01-23T20:20:10.582284421Z" level=info msg="CreateContainer within sandbox \"a76a19c35f4d77a17d1f2794f8fbdb5ade5fe90dc7ae14c3d3619424f90717b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a\""
Jan 23 20:20:10.585848 containerd[1573]: time="2026-01-23T20:20:10.585794812Z" level=info msg="StartContainer for \"349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a\""
Jan 23 20:20:10.591318 containerd[1573]: time="2026-01-23T20:20:10.591191582Z" level=info msg="connecting to shim 349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a" address="unix:///run/containerd/s/173b5240049eaa03968cb01539e4719f63162601e35ce2b1614c149d756f2e73" protocol=ttrpc version=3
Jan 23 20:20:10.625065 systemd[1]: Started cri-containerd-80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9.scope - libcontainer container 80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9.
Jan 23 20:20:10.652348 systemd[1]: Started cri-containerd-349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a.scope - libcontainer container 349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a.
Jan 23 20:20:10.722907 containerd[1573]: time="2026-01-23T20:20:10.722719048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h2cbd,Uid:3764a13a-246f-4d12-bf52-63a9b10730a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\""
Jan 23 20:20:10.772636 containerd[1573]: time="2026-01-23T20:20:10.772483167Z" level=info msg="StartContainer for \"349647c8785fd9d8e834d58673d6936d353a501d05da2cc0be3c6a5317838d2a\" returns successfully"
Jan 23 20:20:11.186843 kubelet[2879]: I0123 20:20:11.186441 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n6g2x" podStartSLOduration=2.186422245 podStartE2EDuration="2.186422245s" podCreationTimestamp="2026-01-23 20:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:20:11.174128516 +0000 UTC m=+6.426693175" watchObservedRunningTime="2026-01-23 20:20:11.186422245 +0000 UTC m=+6.438986902"
Jan 23 20:20:17.808095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714962882.mount: Deactivated successfully.
Jan 23 20:20:21.156261 containerd[1573]: time="2026-01-23T20:20:21.156166821Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:20:21.158512 containerd[1573]: time="2026-01-23T20:20:21.158446390Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 23 20:20:21.170813 containerd[1573]: time="2026-01-23T20:20:21.170188636Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:20:21.173876 containerd[1573]: time="2026-01-23T20:20:21.173825102Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.665970432s"
Jan 23 20:20:21.174077 containerd[1573]: time="2026-01-23T20:20:21.174044148Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 23 20:20:21.179277 containerd[1573]: time="2026-01-23T20:20:21.179244264Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 20:20:21.183341 containerd[1573]: time="2026-01-23T20:20:21.183303986Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 20:20:21.216364 containerd[1573]: time="2026-01-23T20:20:21.216305008Z" level=info msg="Container 86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:21.219044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823269262.mount: Deactivated successfully.
Jan 23 20:20:21.233528 containerd[1573]: time="2026-01-23T20:20:21.233453740Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\""
Jan 23 20:20:21.236558 containerd[1573]: time="2026-01-23T20:20:21.236456305Z" level=info msg="StartContainer for \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\""
Jan 23 20:20:21.240239 containerd[1573]: time="2026-01-23T20:20:21.240198658Z" level=info msg="connecting to shim 86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272" address="unix:///run/containerd/s/d93fc139083fd2dbe7900ace7ff1297db990575870f88039095466d63775e09b" protocol=ttrpc version=3
Jan 23 20:20:21.280378 systemd[1]: Started cri-containerd-86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272.scope - libcontainer container 86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272.
Jan 23 20:20:21.345186 containerd[1573]: time="2026-01-23T20:20:21.344228603Z" level=info msg="StartContainer for \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\" returns successfully"
Jan 23 20:20:21.369891 systemd[1]: cri-containerd-86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272.scope: Deactivated successfully.
Jan 23 20:20:21.371003 systemd[1]: cri-containerd-86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272.scope: Consumed 40ms CPU time, 6.4M memory peak, 12K read from disk, 3.2M written to disk.
Jan 23 20:20:21.571114 containerd[1573]: time="2026-01-23T20:20:21.570838518Z" level=info msg="received container exit event container_id:\"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\" id:\"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\" pid:3298 exited_at:{seconds:1769199621 nanos:374287692}"
Jan 23 20:20:22.210946 containerd[1573]: time="2026-01-23T20:20:22.210842943Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 20:20:22.215060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272-rootfs.mount: Deactivated successfully.
Jan 23 20:20:22.239347 containerd[1573]: time="2026-01-23T20:20:22.239263680Z" level=info msg="Container 59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:22.243439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682891048.mount: Deactivated successfully.
Jan 23 20:20:22.250221 containerd[1573]: time="2026-01-23T20:20:22.250138096Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\""
Jan 23 20:20:22.251971 containerd[1573]: time="2026-01-23T20:20:22.251926474Z" level=info msg="StartContainer for \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\""
Jan 23 20:20:22.254295 containerd[1573]: time="2026-01-23T20:20:22.254241390Z" level=info msg="connecting to shim 59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de" address="unix:///run/containerd/s/d93fc139083fd2dbe7900ace7ff1297db990575870f88039095466d63775e09b" protocol=ttrpc version=3
Jan 23 20:20:22.296419 systemd[1]: Started cri-containerd-59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de.scope - libcontainer container 59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de.
Jan 23 20:20:22.368554 containerd[1573]: time="2026-01-23T20:20:22.368385941Z" level=info msg="StartContainer for \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\" returns successfully"
Jan 23 20:20:22.387649 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 20:20:22.388396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 20:20:22.388919 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 20:20:22.392528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 20:20:22.398144 containerd[1573]: time="2026-01-23T20:20:22.397291867Z" level=info msg="received container exit event container_id:\"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\" id:\"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\" pid:3342 exited_at:{seconds:1769199622 nanos:396927692}"
Jan 23 20:20:22.397763 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 20:20:22.399804 systemd[1]: cri-containerd-59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de.scope: Deactivated successfully.
Jan 23 20:20:22.441588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 20:20:23.215134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de-rootfs.mount: Deactivated successfully.
Jan 23 20:20:23.228145 containerd[1573]: time="2026-01-23T20:20:23.227822916Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 20:20:23.258298 containerd[1573]: time="2026-01-23T20:20:23.258230869Z" level=info msg="Container ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:23.283232 containerd[1573]: time="2026-01-23T20:20:23.283078312Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\""
Jan 23 20:20:23.285371 containerd[1573]: time="2026-01-23T20:20:23.285214195Z" level=info msg="StartContainer for \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\""
Jan 23 20:20:23.291583 containerd[1573]: time="2026-01-23T20:20:23.289741137Z" level=info msg="connecting to shim ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761" address="unix:///run/containerd/s/d93fc139083fd2dbe7900ace7ff1297db990575870f88039095466d63775e09b" protocol=ttrpc version=3
Jan 23 20:20:23.336518 systemd[1]: Started cri-containerd-ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761.scope - libcontainer container ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761.
Jan 23 20:20:23.463773 containerd[1573]: time="2026-01-23T20:20:23.463681649Z" level=info msg="StartContainer for \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\" returns successfully"
Jan 23 20:20:23.471341 systemd[1]: cri-containerd-ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761.scope: Deactivated successfully.
Jan 23 20:20:23.471793 systemd[1]: cri-containerd-ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761.scope: Consumed 56ms CPU time, 4.3M memory peak, 1M read from disk.
Jan 23 20:20:23.477419 containerd[1573]: time="2026-01-23T20:20:23.477276588Z" level=info msg="received container exit event container_id:\"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\" id:\"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\" pid:3401 exited_at:{seconds:1769199623 nanos:476847681}"
Jan 23 20:20:23.535655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761-rootfs.mount: Deactivated successfully.
Jan 23 20:20:24.152481 containerd[1573]: time="2026-01-23T20:20:24.152360566Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:20:24.153622 containerd[1573]: time="2026-01-23T20:20:24.153371573Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 23 20:20:24.154785 containerd[1573]: time="2026-01-23T20:20:24.154740180Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:20:24.156960 containerd[1573]: time="2026-01-23T20:20:24.156916641Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.977483076s"
Jan 23 20:20:24.157055 containerd[1573]: time="2026-01-23T20:20:24.156980354Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 23 20:20:24.161712 containerd[1573]: time="2026-01-23T20:20:24.161653058Z" level=info msg="CreateContainer within sandbox \"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 20:20:24.172896 containerd[1573]: time="2026-01-23T20:20:24.172816245Z" level=info msg="Container f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:24.183826 containerd[1573]: time="2026-01-23T20:20:24.183772187Z" level=info msg="CreateContainer within sandbox \"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\""
Jan 23 20:20:24.187006 containerd[1573]: time="2026-01-23T20:20:24.186968223Z" level=info msg="StartContainer for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\""
Jan 23 20:20:24.189666 containerd[1573]: time="2026-01-23T20:20:24.189618373Z" level=info msg="connecting to shim f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1" address="unix:///run/containerd/s/b49eb1b09a57a8eeb24cf156f51d2b7c5a6f611624674ebc185854550a574fe1" protocol=ttrpc version=3
Jan 23 20:20:24.229383 systemd[1]: Started cri-containerd-f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1.scope - libcontainer container f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1.
Jan 23 20:20:24.242547 containerd[1573]: time="2026-01-23T20:20:24.242437450Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 20:20:24.275867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260765534.mount: Deactivated successfully.
Jan 23 20:20:24.281554 containerd[1573]: time="2026-01-23T20:20:24.278248519Z" level=info msg="Container 1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:24.315938 containerd[1573]: time="2026-01-23T20:20:24.315871077Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\""
Jan 23 20:20:24.319249 containerd[1573]: time="2026-01-23T20:20:24.319197821Z" level=info msg="StartContainer for \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\""
Jan 23 20:20:24.323101 containerd[1573]: time="2026-01-23T20:20:24.322846609Z" level=info msg="connecting to shim 1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e" address="unix:///run/containerd/s/d93fc139083fd2dbe7900ace7ff1297db990575870f88039095466d63775e09b" protocol=ttrpc version=3
Jan 23 20:20:24.374334 systemd[1]: Started cri-containerd-1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e.scope - libcontainer container 1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e.
Jan 23 20:20:24.404140 containerd[1573]: time="2026-01-23T20:20:24.402972851Z" level=info msg="StartContainer for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" returns successfully"
Jan 23 20:20:24.451841 systemd[1]: cri-containerd-1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e.scope: Deactivated successfully.
Jan 23 20:20:24.458505 containerd[1573]: time="2026-01-23T20:20:24.458454346Z" level=info msg="received container exit event container_id:\"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\" id:\"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\" pid:3468 exited_at:{seconds:1769199624 nanos:452631502}"
Jan 23 20:20:24.463575 containerd[1573]: time="2026-01-23T20:20:24.463463848Z" level=info msg="StartContainer for \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\" returns successfully"
Jan 23 20:20:25.216639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e-rootfs.mount: Deactivated successfully.
Jan 23 20:20:25.280513 containerd[1573]: time="2026-01-23T20:20:25.280414057Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 20:20:25.304113 containerd[1573]: time="2026-01-23T20:20:25.301365625Z" level=info msg="Container dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:25.316067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273525323.mount: Deactivated successfully.
Jan 23 20:20:25.330847 containerd[1573]: time="2026-01-23T20:20:25.330748416Z" level=info msg="CreateContainer within sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\""
Jan 23 20:20:25.333645 containerd[1573]: time="2026-01-23T20:20:25.332630272Z" level=info msg="StartContainer for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\""
Jan 23 20:20:25.337108 containerd[1573]: time="2026-01-23T20:20:25.336984003Z" level=info msg="connecting to shim dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7" address="unix:///run/containerd/s/d93fc139083fd2dbe7900ace7ff1297db990575870f88039095466d63775e09b" protocol=ttrpc version=3
Jan 23 20:20:25.407318 systemd[1]: Started cri-containerd-dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7.scope - libcontainer container dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7.
Jan 23 20:20:25.519946 containerd[1573]: time="2026-01-23T20:20:25.517541391Z" level=info msg="StartContainer for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" returns successfully"
Jan 23 20:20:25.577333 kubelet[2879]: I0123 20:20:25.577234 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h2cbd" podStartSLOduration=2.145157838 podStartE2EDuration="15.577175616s" podCreationTimestamp="2026-01-23 20:20:10 +0000 UTC" firstStartedPulling="2026-01-23 20:20:10.726622002 +0000 UTC m=+5.979186638" lastFinishedPulling="2026-01-23 20:20:24.158639774 +0000 UTC m=+19.411204416" observedRunningTime="2026-01-23 20:20:25.470399482 +0000 UTC m=+20.722964176" watchObservedRunningTime="2026-01-23 20:20:25.577175616 +0000 UTC m=+20.829740293"
Jan 23 20:20:25.960161 kubelet[2879]: I0123 20:20:25.959971 2879 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 20:20:26.062242 systemd[1]: Created slice kubepods-burstable-pod05af8bf3_0faa_4c7d_a8c0_f91fc69fdbb9.slice - libcontainer container kubepods-burstable-pod05af8bf3_0faa_4c7d_a8c0_f91fc69fdbb9.slice.
Jan 23 20:20:26.078996 systemd[1]: Created slice kubepods-burstable-pod94082976_6905_4e52_96b0_cc8281712824.slice - libcontainer container kubepods-burstable-pod94082976_6905_4e52_96b0_cc8281712824.slice.
Jan 23 20:20:26.143032 kubelet[2879]: I0123 20:20:26.142868 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5npx\" (UniqueName: \"kubernetes.io/projected/94082976-6905-4e52-96b0-cc8281712824-kube-api-access-h5npx\") pod \"coredns-668d6bf9bc-ls7bv\" (UID: \"94082976-6905-4e52-96b0-cc8281712824\") " pod="kube-system/coredns-668d6bf9bc-ls7bv"
Jan 23 20:20:26.143703 kubelet[2879]: I0123 20:20:26.143456 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05af8bf3-0faa-4c7d-a8c0-f91fc69fdbb9-config-volume\") pod \"coredns-668d6bf9bc-6kt28\" (UID: \"05af8bf3-0faa-4c7d-a8c0-f91fc69fdbb9\") " pod="kube-system/coredns-668d6bf9bc-6kt28"
Jan 23 20:20:26.143703 kubelet[2879]: I0123 20:20:26.143500 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbnt\" (UniqueName: \"kubernetes.io/projected/05af8bf3-0faa-4c7d-a8c0-f91fc69fdbb9-kube-api-access-4lbnt\") pod \"coredns-668d6bf9bc-6kt28\" (UID: \"05af8bf3-0faa-4c7d-a8c0-f91fc69fdbb9\") " pod="kube-system/coredns-668d6bf9bc-6kt28"
Jan 23 20:20:26.143703 kubelet[2879]: I0123 20:20:26.143541 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94082976-6905-4e52-96b0-cc8281712824-config-volume\") pod \"coredns-668d6bf9bc-ls7bv\" (UID: \"94082976-6905-4e52-96b0-cc8281712824\") " pod="kube-system/coredns-668d6bf9bc-ls7bv"
Jan 23 20:20:26.375623 containerd[1573]: time="2026-01-23T20:20:26.375419695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6kt28,Uid:05af8bf3-0faa-4c7d-a8c0-f91fc69fdbb9,Namespace:kube-system,Attempt:0,}"
Jan 23 20:20:26.391252 containerd[1573]: time="2026-01-23T20:20:26.388971922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ls7bv,Uid:94082976-6905-4e52-96b0-cc8281712824,Namespace:kube-system,Attempt:0,}"
Jan 23 20:20:28.708560 systemd-networkd[1488]: cilium_host: Link UP
Jan 23 20:20:28.710007 systemd-networkd[1488]: cilium_net: Link UP
Jan 23 20:20:28.711077 systemd-networkd[1488]: cilium_net: Gained carrier
Jan 23 20:20:28.711701 systemd-networkd[1488]: cilium_host: Gained carrier
Jan 23 20:20:28.731232 systemd-networkd[1488]: cilium_net: Gained IPv6LL
Jan 23 20:20:28.746666 systemd-networkd[1488]: cilium_host: Gained IPv6LL
Jan 23 20:20:28.904441 systemd-networkd[1488]: cilium_vxlan: Link UP
Jan 23 20:20:28.904456 systemd-networkd[1488]: cilium_vxlan: Gained carrier
Jan 23 20:20:29.641995 kernel: NET: Registered PF_ALG protocol family
Jan 23 20:20:29.980432 systemd-networkd[1488]: cilium_vxlan: Gained IPv6LL
Jan 23 20:20:30.786437 systemd-networkd[1488]: lxc_health: Link UP
Jan 23 20:20:30.790749 systemd-networkd[1488]: lxc_health: Gained carrier
Jan 23 20:20:31.042163 systemd-networkd[1488]: lxc666e4eebdc3d: Link UP
Jan 23 20:20:31.067816 kernel: eth0: renamed from tmp30fef
Jan 23 20:20:31.071894 systemd-networkd[1488]: lxcd8ccb8e25423: Link UP
Jan 23 20:20:31.076670 systemd-networkd[1488]: lxc666e4eebdc3d: Gained carrier
Jan 23 20:20:31.079211 kernel: eth0: renamed from tmp95056
Jan 23 20:20:31.081149 systemd-networkd[1488]: lxcd8ccb8e25423: Gained carrier
Jan 23 20:20:32.156448 systemd-networkd[1488]: lxc666e4eebdc3d: Gained IPv6LL
Jan 23 20:20:32.351221 systemd-networkd[1488]: lxc_health: Gained IPv6LL
Jan 23 20:20:32.382146 kubelet[2879]: I0123 20:20:32.381978 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4fwrt" podStartSLOduration=12.710502831 podStartE2EDuration="23.381917223s" podCreationTimestamp="2026-01-23 20:20:09 +0000 UTC" firstStartedPulling="2026-01-23 20:20:10.507325604 +0000 UTC m=+5.759890244" lastFinishedPulling="2026-01-23 20:20:21.178739988 +0000 UTC m=+16.431304636" observedRunningTime="2026-01-23 20:20:26.354175579 +0000 UTC m=+21.606740238" watchObservedRunningTime="2026-01-23 20:20:32.381917223 +0000 UTC m=+27.634481873"
Jan 23 20:20:32.541526 systemd-networkd[1488]: lxcd8ccb8e25423: Gained IPv6LL
Jan 23 20:20:37.076345 containerd[1573]: time="2026-01-23T20:20:37.074047417Z" level=info msg="connecting to shim 950560643f83fb5edb83eeae83a3d3a4406bab29e7063db0c1f9c4f3642abd23" address="unix:///run/containerd/s/9046ccc3242ddf42e1c5902b808a0cb8d0428fdee2018cf546310704aabb5eb0" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:20:37.088413 containerd[1573]: time="2026-01-23T20:20:37.088288590Z" level=info msg="connecting to shim 30fef90ad4c18590cad82f9220670676c0e1b4ea8351316911f9608404da4a3e" address="unix:///run/containerd/s/fe9e91f1769aeb9b27a2ddeb47ce3ee21548c6695a003add8a9e58935087be68" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:20:37.169420 systemd[1]: Started cri-containerd-30fef90ad4c18590cad82f9220670676c0e1b4ea8351316911f9608404da4a3e.scope - libcontainer container 30fef90ad4c18590cad82f9220670676c0e1b4ea8351316911f9608404da4a3e.
Jan 23 20:20:37.182060 systemd[1]: Started cri-containerd-950560643f83fb5edb83eeae83a3d3a4406bab29e7063db0c1f9c4f3642abd23.scope - libcontainer container 950560643f83fb5edb83eeae83a3d3a4406bab29e7063db0c1f9c4f3642abd23.
Jan 23 20:20:37.312117 containerd[1573]: time="2026-01-23T20:20:37.312001355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6kt28,Uid:05af8bf3-0faa-4c7d-a8c0-f91fc69fdbb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"30fef90ad4c18590cad82f9220670676c0e1b4ea8351316911f9608404da4a3e\""
Jan 23 20:20:37.313539 containerd[1573]: time="2026-01-23T20:20:37.313264853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ls7bv,Uid:94082976-6905-4e52-96b0-cc8281712824,Namespace:kube-system,Attempt:0,} returns sandbox id \"950560643f83fb5edb83eeae83a3d3a4406bab29e7063db0c1f9c4f3642abd23\""
Jan 23 20:20:37.320041 containerd[1573]: time="2026-01-23T20:20:37.319969277Z" level=info msg="CreateContainer within sandbox \"30fef90ad4c18590cad82f9220670676c0e1b4ea8351316911f9608404da4a3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 20:20:37.320625 containerd[1573]: time="2026-01-23T20:20:37.320309518Z" level=info msg="CreateContainer within sandbox \"950560643f83fb5edb83eeae83a3d3a4406bab29e7063db0c1f9c4f3642abd23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 20:20:37.338775 containerd[1573]: time="2026-01-23T20:20:37.338398085Z" level=info msg="Container 043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:37.341295 containerd[1573]: time="2026-01-23T20:20:37.339760311Z" level=info msg="Container 11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:20:37.351681 containerd[1573]: time="2026-01-23T20:20:37.351645074Z" level=info msg="CreateContainer within sandbox \"950560643f83fb5edb83eeae83a3d3a4406bab29e7063db0c1f9c4f3642abd23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534\""
Jan 23 20:20:37.353031 containerd[1573]: time="2026-01-23T20:20:37.352964332Z" level=info msg="StartContainer for \"11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534\""
Jan 23 20:20:37.357467 containerd[1573]: time="2026-01-23T20:20:37.357382181Z" level=info msg="CreateContainer within sandbox \"30fef90ad4c18590cad82f9220670676c0e1b4ea8351316911f9608404da4a3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e\""
Jan 23 20:20:37.364685 containerd[1573]: time="2026-01-23T20:20:37.364453709Z" level=info msg="StartContainer for \"043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e\""
Jan 23 20:20:37.364685 containerd[1573]: time="2026-01-23T20:20:37.364573903Z" level=info msg="connecting to shim 11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534" address="unix:///run/containerd/s/9046ccc3242ddf42e1c5902b808a0cb8d0428fdee2018cf546310704aabb5eb0" protocol=ttrpc version=3
Jan 23 20:20:37.371488 containerd[1573]: time="2026-01-23T20:20:37.371427431Z" level=info msg="connecting to shim 043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e" address="unix:///run/containerd/s/fe9e91f1769aeb9b27a2ddeb47ce3ee21548c6695a003add8a9e58935087be68" protocol=ttrpc version=3
Jan 23 20:20:37.398647 systemd[1]: Started cri-containerd-11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534.scope - libcontainer container 11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534.
Jan 23 20:20:37.413314 systemd[1]: Started cri-containerd-043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e.scope - libcontainer container 043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e.
Jan 23 20:20:37.472006 containerd[1573]: time="2026-01-23T20:20:37.471925571Z" level=info msg="StartContainer for \"11f96aced440cf6704b4796aa6d90d59ff65ac76993184f50cea89db92ef7534\" returns successfully"
Jan 23 20:20:37.502675 containerd[1573]: time="2026-01-23T20:20:37.502551053Z" level=info msg="StartContainer for \"043b0326010651f3b0597337eb02c5563ad38b23416bb45e5ef0dadf475a966e\" returns successfully"
Jan 23 20:20:38.044111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988868078.mount: Deactivated successfully.
Jan 23 20:20:38.390181 kubelet[2879]: I0123 20:20:38.389928 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6kt28" podStartSLOduration=28.387402756 podStartE2EDuration="28.387402756s" podCreationTimestamp="2026-01-23 20:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:20:38.385211701 +0000 UTC m=+33.637776363" watchObservedRunningTime="2026-01-23 20:20:38.387402756 +0000 UTC m=+33.639967413"
Jan 23 20:20:39.390116 kubelet[2879]: I0123 20:20:39.388953 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ls7bv" podStartSLOduration=29.388923701 podStartE2EDuration="29.388923701s" podCreationTimestamp="2026-01-23 20:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:20:38.431171778 +0000 UTC m=+33.683736459" watchObservedRunningTime="2026-01-23 20:20:39.388923701 +0000 UTC m=+34.641488357"
Jan 23 20:21:15.471806 systemd[1]: Started sshd@9-10.244.9.250:22-68.220.241.50:47164.service - OpenSSH per-connection server daemon (68.220.241.50:47164).
Jan 23 20:21:16.123818 sshd[4214]: Accepted publickey for core from 68.220.241.50 port 47164 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:16.126388 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:16.162274 systemd-logind[1552]: New session 12 of user core.
Jan 23 20:21:16.180353 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 20:21:17.093477 sshd[4217]: Connection closed by 68.220.241.50 port 47164
Jan 23 20:21:17.095364 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:17.105887 systemd[1]: sshd@9-10.244.9.250:22-68.220.241.50:47164.service: Deactivated successfully.
Jan 23 20:21:17.110152 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 20:21:17.111918 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit.
Jan 23 20:21:17.114231 systemd-logind[1552]: Removed session 12.
Jan 23 20:21:22.196806 systemd[1]: Started sshd@10-10.244.9.250:22-68.220.241.50:47170.service - OpenSSH per-connection server daemon (68.220.241.50:47170).
Jan 23 20:21:22.789164 sshd[4231]: Accepted publickey for core from 68.220.241.50 port 47170 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:22.790950 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:22.799563 systemd-logind[1552]: New session 13 of user core.
Jan 23 20:21:22.808460 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 20:21:23.305886 sshd[4234]: Connection closed by 68.220.241.50 port 47170
Jan 23 20:21:23.307055 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:23.312681 systemd[1]: sshd@10-10.244.9.250:22-68.220.241.50:47170.service: Deactivated successfully.
Jan 23 20:21:23.315698 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 20:21:23.317916 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit.
Jan 23 20:21:23.320572 systemd-logind[1552]: Removed session 13.
Jan 23 20:21:28.421847 systemd[1]: Started sshd@11-10.244.9.250:22-68.220.241.50:59064.service - OpenSSH per-connection server daemon (68.220.241.50:59064).
Jan 23 20:21:29.001889 sshd[4247]: Accepted publickey for core from 68.220.241.50 port 59064 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:29.004175 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:29.013163 systemd-logind[1552]: New session 14 of user core.
Jan 23 20:21:29.021370 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 20:21:29.502067 sshd[4250]: Connection closed by 68.220.241.50 port 59064
Jan 23 20:21:29.503227 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:29.509863 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit.
Jan 23 20:21:29.511413 systemd[1]: sshd@11-10.244.9.250:22-68.220.241.50:59064.service: Deactivated successfully.
Jan 23 20:21:29.514847 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 20:21:29.518578 systemd-logind[1552]: Removed session 14.
Jan 23 20:21:34.614339 systemd[1]: Started sshd@12-10.244.9.250:22-68.220.241.50:35290.service - OpenSSH per-connection server daemon (68.220.241.50:35290).
Jan 23 20:21:35.225120 sshd[4264]: Accepted publickey for core from 68.220.241.50 port 35290 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:35.228181 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:35.235972 systemd-logind[1552]: New session 15 of user core.
Jan 23 20:21:35.245395 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 20:21:35.751071 sshd[4267]: Connection closed by 68.220.241.50 port 35290
Jan 23 20:21:35.753428 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:35.759516 systemd[1]: sshd@12-10.244.9.250:22-68.220.241.50:35290.service: Deactivated successfully.
Jan 23 20:21:35.762841 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 20:21:35.764437 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit.
Jan 23 20:21:35.767332 systemd-logind[1552]: Removed session 15.
Jan 23 20:21:35.855967 systemd[1]: Started sshd@13-10.244.9.250:22-68.220.241.50:35294.service - OpenSSH per-connection server daemon (68.220.241.50:35294).
Jan 23 20:21:36.451171 sshd[4280]: Accepted publickey for core from 68.220.241.50 port 35294 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:36.453039 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:36.460346 systemd-logind[1552]: New session 16 of user core.
Jan 23 20:21:36.473394 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 20:21:37.023118 sshd[4283]: Connection closed by 68.220.241.50 port 35294
Jan 23 20:21:37.024057 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:37.030320 systemd[1]: sshd@13-10.244.9.250:22-68.220.241.50:35294.service: Deactivated successfully.
Jan 23 20:21:37.032791 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 20:21:37.033996 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit.
Jan 23 20:21:37.036548 systemd-logind[1552]: Removed session 16.
Jan 23 20:21:37.123588 systemd[1]: Started sshd@14-10.244.9.250:22-68.220.241.50:35310.service - OpenSSH per-connection server daemon (68.220.241.50:35310).
Jan 23 20:21:37.709669 sshd[4293]: Accepted publickey for core from 68.220.241.50 port 35310 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:37.711610 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:37.719158 systemd-logind[1552]: New session 17 of user core.
Jan 23 20:21:37.729443 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 20:21:38.211294 sshd[4296]: Connection closed by 68.220.241.50 port 35310
Jan 23 20:21:38.213012 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:38.219553 systemd[1]: sshd@14-10.244.9.250:22-68.220.241.50:35310.service: Deactivated successfully.
Jan 23 20:21:38.223655 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 20:21:38.225849 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit.
Jan 23 20:21:38.228077 systemd-logind[1552]: Removed session 17.
Jan 23 20:21:43.318197 systemd[1]: Started sshd@15-10.244.9.250:22-68.220.241.50:38600.service - OpenSSH per-connection server daemon (68.220.241.50:38600).
Jan 23 20:21:43.916749 sshd[4310]: Accepted publickey for core from 68.220.241.50 port 38600 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:43.918681 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:43.928152 systemd-logind[1552]: New session 18 of user core.
Jan 23 20:21:43.935325 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 20:21:44.444319 sshd[4313]: Connection closed by 68.220.241.50 port 38600
Jan 23 20:21:44.444987 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:44.456369 systemd[1]: sshd@15-10.244.9.250:22-68.220.241.50:38600.service: Deactivated successfully.
Jan 23 20:21:44.460337 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 20:21:44.462509 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit.
Jan 23 20:21:44.465127 systemd-logind[1552]: Removed session 18.
Jan 23 20:21:49.552156 systemd[1]: Started sshd@16-10.244.9.250:22-68.220.241.50:38606.service - OpenSSH per-connection server daemon (68.220.241.50:38606).
Jan 23 20:21:50.169629 sshd[4325]: Accepted publickey for core from 68.220.241.50 port 38606 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:50.170479 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:50.177628 systemd-logind[1552]: New session 19 of user core.
Jan 23 20:21:50.191429 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 20:21:50.673117 sshd[4328]: Connection closed by 68.220.241.50 port 38606
Jan 23 20:21:50.672236 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:50.680023 systemd[1]: sshd@16-10.244.9.250:22-68.220.241.50:38606.service: Deactivated successfully.
Jan 23 20:21:50.683163 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 20:21:50.684755 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit.
Jan 23 20:21:50.687541 systemd-logind[1552]: Removed session 19.
Jan 23 20:21:50.775519 systemd[1]: Started sshd@17-10.244.9.250:22-68.220.241.50:38608.service - OpenSSH per-connection server daemon (68.220.241.50:38608).
Jan 23 20:21:51.362243 sshd[4339]: Accepted publickey for core from 68.220.241.50 port 38608 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:51.364400 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:51.373919 systemd-logind[1552]: New session 20 of user core.
Jan 23 20:21:51.381324 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 20:21:52.137445 sshd[4342]: Connection closed by 68.220.241.50 port 38608
Jan 23 20:21:52.141269 sshd-session[4339]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:52.158641 systemd[1]: sshd@17-10.244.9.250:22-68.220.241.50:38608.service: Deactivated successfully.
Jan 23 20:21:52.162543 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 20:21:52.163847 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit.
Jan 23 20:21:52.166185 systemd-logind[1552]: Removed session 20.
Jan 23 20:21:52.240179 systemd[1]: Started sshd@18-10.244.9.250:22-68.220.241.50:38622.service - OpenSSH per-connection server daemon (68.220.241.50:38622).
Jan 23 20:21:52.857841 sshd[4352]: Accepted publickey for core from 68.220.241.50 port 38622 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:52.860329 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:52.867979 systemd-logind[1552]: New session 21 of user core.
Jan 23 20:21:52.884400 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 20:21:54.125408 sshd[4355]: Connection closed by 68.220.241.50 port 38622
Jan 23 20:21:54.125985 sshd-session[4352]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:54.132752 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit.
Jan 23 20:21:54.133582 systemd[1]: sshd@18-10.244.9.250:22-68.220.241.50:38622.service: Deactivated successfully.
Jan 23 20:21:54.137509 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 20:21:54.139994 systemd-logind[1552]: Removed session 21.
Jan 23 20:21:54.231180 systemd[1]: Started sshd@19-10.244.9.250:22-68.220.241.50:49820.service - OpenSSH per-connection server daemon (68.220.241.50:49820).
Jan 23 20:21:54.823628 sshd[4372]: Accepted publickey for core from 68.220.241.50 port 49820 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:54.826484 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:54.835452 systemd-logind[1552]: New session 22 of user core.
Jan 23 20:21:54.840298 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 20:21:55.528074 sshd[4375]: Connection closed by 68.220.241.50 port 49820
Jan 23 20:21:55.529685 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:55.535825 systemd[1]: sshd@19-10.244.9.250:22-68.220.241.50:49820.service: Deactivated successfully.
Jan 23 20:21:55.539383 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 20:21:55.541979 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit.
Jan 23 20:21:55.544869 systemd-logind[1552]: Removed session 22.
Jan 23 20:21:55.635911 systemd[1]: Started sshd@20-10.244.9.250:22-68.220.241.50:49826.service - OpenSSH per-connection server daemon (68.220.241.50:49826).
Jan 23 20:21:56.250135 sshd[4385]: Accepted publickey for core from 68.220.241.50 port 49826 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:21:56.252643 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:21:56.260800 systemd-logind[1552]: New session 23 of user core.
Jan 23 20:21:56.269306 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 20:21:56.752176 sshd[4388]: Connection closed by 68.220.241.50 port 49826
Jan 23 20:21:56.753408 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Jan 23 20:21:56.758690 systemd[1]: sshd@20-10.244.9.250:22-68.220.241.50:49826.service: Deactivated successfully.
Jan 23 20:21:56.762595 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 20:21:56.766647 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit.
Jan 23 20:21:56.768759 systemd-logind[1552]: Removed session 23.
Jan 23 20:22:01.854886 systemd[1]: Started sshd@21-10.244.9.250:22-68.220.241.50:49838.service - OpenSSH per-connection server daemon (68.220.241.50:49838).
Jan 23 20:22:02.439806 sshd[4399]: Accepted publickey for core from 68.220.241.50 port 49838 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:02.441953 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:02.453871 systemd-logind[1552]: New session 24 of user core.
Jan 23 20:22:02.459508 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 20:22:02.936165 sshd[4404]: Connection closed by 68.220.241.50 port 49838
Jan 23 20:22:02.937122 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
Jan 23 20:22:02.943498 systemd[1]: sshd@21-10.244.9.250:22-68.220.241.50:49838.service: Deactivated successfully.
Jan 23 20:22:02.946390 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 20:22:02.948511 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit.
Jan 23 20:22:02.950074 systemd-logind[1552]: Removed session 24.
Jan 23 20:22:08.042776 systemd[1]: Started sshd@22-10.244.9.250:22-68.220.241.50:46692.service - OpenSSH per-connection server daemon (68.220.241.50:46692).
Jan 23 20:22:08.631136 sshd[4419]: Accepted publickey for core from 68.220.241.50 port 46692 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:08.632809 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:08.641300 systemd-logind[1552]: New session 25 of user core.
Jan 23 20:22:08.648345 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 20:22:09.135914 sshd[4422]: Connection closed by 68.220.241.50 port 46692
Jan 23 20:22:09.137302 sshd-session[4419]: pam_unix(sshd:session): session closed for user core
Jan 23 20:22:09.144266 systemd[1]: sshd@22-10.244.9.250:22-68.220.241.50:46692.service: Deactivated successfully.
Jan 23 20:22:09.149823 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 20:22:09.151620 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit.
Jan 23 20:22:09.154364 systemd-logind[1552]: Removed session 25.
Jan 23 20:22:14.246460 systemd[1]: Started sshd@23-10.244.9.250:22-68.220.241.50:50480.service - OpenSSH per-connection server daemon (68.220.241.50:50480).
Jan 23 20:22:14.839293 sshd[4436]: Accepted publickey for core from 68.220.241.50 port 50480 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:14.841251 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:14.848581 systemd-logind[1552]: New session 26 of user core.
Jan 23 20:22:14.858361 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 20:22:15.331661 sshd[4439]: Connection closed by 68.220.241.50 port 50480
Jan 23 20:22:15.333361 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
Jan 23 20:22:15.338781 systemd[1]: sshd@23-10.244.9.250:22-68.220.241.50:50480.service: Deactivated successfully.
Jan 23 20:22:15.342354 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 20:22:15.345150 systemd-logind[1552]: Session 26 logged out. Waiting for processes to exit.
Jan 23 20:22:15.346690 systemd-logind[1552]: Removed session 26.
Jan 23 20:22:15.443378 systemd[1]: Started sshd@24-10.244.9.250:22-68.220.241.50:50484.service - OpenSSH per-connection server daemon (68.220.241.50:50484).
Jan 23 20:22:16.028458 sshd[4450]: Accepted publickey for core from 68.220.241.50 port 50484 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:16.030279 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:16.038159 systemd-logind[1552]: New session 27 of user core.
Jan 23 20:22:16.046490 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 20:22:17.976401 containerd[1573]: time="2026-01-23T20:22:17.976282316Z" level=info msg="StopContainer for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" with timeout 30 (s)"
Jan 23 20:22:17.977587 containerd[1573]: time="2026-01-23T20:22:17.977413840Z" level=info msg="Stop container \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" with signal terminated"
Jan 23 20:22:18.015407 systemd[1]: cri-containerd-f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1.scope: Deactivated successfully.
Jan 23 20:22:18.024464 containerd[1573]: time="2026-01-23T20:22:18.024407218Z" level=info msg="received container exit event container_id:\"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" id:\"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" pid:3447 exited_at:{seconds:1769199738 nanos:21500810}"
Jan 23 20:22:18.056867 containerd[1573]: time="2026-01-23T20:22:18.056774783Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 20:22:18.067999 containerd[1573]: time="2026-01-23T20:22:18.067529856Z" level=info msg="StopContainer for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" with timeout 2 (s)"
Jan 23 20:22:18.068386 containerd[1573]: time="2026-01-23T20:22:18.068042546Z" level=info msg="Stop container \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" with signal terminated"
Jan 23 20:22:18.083339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1-rootfs.mount: Deactivated successfully.
Jan 23 20:22:18.096766 systemd-networkd[1488]: lxc_health: Link DOWN
Jan 23 20:22:18.097361 systemd-networkd[1488]: lxc_health: Lost carrier
Jan 23 20:22:18.109575 containerd[1573]: time="2026-01-23T20:22:18.109465228Z" level=info msg="StopContainer for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" returns successfully"
Jan 23 20:22:18.113123 containerd[1573]: time="2026-01-23T20:22:18.112943664Z" level=info msg="StopPodSandbox for \"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\""
Jan 23 20:22:18.118893 containerd[1573]: time="2026-01-23T20:22:18.117873932Z" level=info msg="Container to stop \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:22:18.128666 systemd[1]: cri-containerd-dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7.scope: Deactivated successfully.
Jan 23 20:22:18.129836 systemd[1]: cri-containerd-dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7.scope: Consumed 10.528s CPU time, 225.7M memory peak, 107M read from disk, 13.3M written to disk.
Jan 23 20:22:18.132568 containerd[1573]: time="2026-01-23T20:22:18.130325355Z" level=info msg="received container exit event container_id:\"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" id:\"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" pid:3518 exited_at:{seconds:1769199738 nanos:129603489}"
Jan 23 20:22:18.143610 systemd[1]: cri-containerd-80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9.scope: Deactivated successfully.
Jan 23 20:22:18.151065 containerd[1573]: time="2026-01-23T20:22:18.150952355Z" level=info msg="received sandbox exit event container_id:\"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" id:\"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" exit_status:137 exited_at:{seconds:1769199738 nanos:150454614}" monitor_name=podsandbox
Jan 23 20:22:18.186964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7-rootfs.mount: Deactivated successfully.
Jan 23 20:22:18.202312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9-rootfs.mount: Deactivated successfully.
Jan 23 20:22:18.205197 containerd[1573]: time="2026-01-23T20:22:18.205009887Z" level=info msg="StopContainer for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" returns successfully"
Jan 23 20:22:18.207211 containerd[1573]: time="2026-01-23T20:22:18.207167525Z" level=info msg="shim disconnected" id=80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9 namespace=k8s.io
Jan 23 20:22:18.207302 containerd[1573]: time="2026-01-23T20:22:18.207204342Z" level=warning msg="cleaning up after shim disconnected" id=80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9 namespace=k8s.io
Jan 23 20:22:18.217097 containerd[1573]: time="2026-01-23T20:22:18.207226954Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 20:22:18.217857 containerd[1573]: time="2026-01-23T20:22:18.209123837Z" level=info msg="StopPodSandbox for \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\""
Jan 23 20:22:18.217857 containerd[1573]: time="2026-01-23T20:22:18.217421564Z" level=info msg="Container to stop \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:22:18.217857 containerd[1573]: time="2026-01-23T20:22:18.217444617Z" level=info msg="Container to stop \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:22:18.217857 containerd[1573]: time="2026-01-23T20:22:18.217460320Z" level=info msg="Container to stop \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:22:18.217857 containerd[1573]: time="2026-01-23T20:22:18.217477600Z" level=info msg="Container to stop \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:22:18.217857 containerd[1573]: time="2026-01-23T20:22:18.217494010Z" level=info msg="Container to stop \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:22:18.230859 systemd[1]: cri-containerd-273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b.scope: Deactivated successfully.
Jan 23 20:22:18.235914 containerd[1573]: time="2026-01-23T20:22:18.235665729Z" level=info msg="received sandbox exit event container_id:\"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" id:\"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" exit_status:137 exited_at:{seconds:1769199738 nanos:235069365}" monitor_name=podsandbox
Jan 23 20:22:18.251139 containerd[1573]: time="2026-01-23T20:22:18.250834944Z" level=info msg="received sandbox container exit event sandbox_id:\"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" exit_status:137 exited_at:{seconds:1769199738 nanos:150454614}" monitor_name=criService
Jan 23 20:22:18.255193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9-shm.mount: Deactivated successfully.
Jan 23 20:22:18.257736 containerd[1573]: time="2026-01-23T20:22:18.257558009Z" level=info msg="TearDown network for sandbox \"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" successfully"
Jan 23 20:22:18.257736 containerd[1573]: time="2026-01-23T20:22:18.257597587Z" level=info msg="StopPodSandbox for \"80609fae7165f201e0d0c4b71b5b0f62197785b87f6b9ee39762acd3ea2d67e9\" returns successfully"
Jan 23 20:22:18.294215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b-rootfs.mount: Deactivated successfully.
Jan 23 20:22:18.300598 containerd[1573]: time="2026-01-23T20:22:18.300475599Z" level=info msg="shim disconnected" id=273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b namespace=k8s.io
Jan 23 20:22:18.301620 containerd[1573]: time="2026-01-23T20:22:18.300686085Z" level=warning msg="cleaning up after shim disconnected" id=273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b namespace=k8s.io
Jan 23 20:22:18.301620 containerd[1573]: time="2026-01-23T20:22:18.300740576Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 20:22:18.321379 containerd[1573]: time="2026-01-23T20:22:18.321326314Z" level=info msg="TearDown network for sandbox \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" successfully"
Jan 23 20:22:18.321379 containerd[1573]: time="2026-01-23T20:22:18.321375443Z" level=info msg="StopPodSandbox for \"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" returns successfully"
Jan 23 20:22:18.322011 containerd[1573]: time="2026-01-23T20:22:18.321958507Z" level=info msg="received sandbox container exit event sandbox_id:\"273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b\" exit_status:137 exited_at:{seconds:1769199738 nanos:235069365}" monitor_name=criService
Jan 23 20:22:18.406149 kubelet[2879]: I0123 20:22:18.405587 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcbsg\" (UniqueName: \"kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-kube-api-access-zcbsg\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.407357 kubelet[2879]: I0123 20:22:18.407209 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-etc-cni-netd\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.407642 kubelet[2879]: I0123 20:22:18.407619 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-net\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.407919 kubelet[2879]: I0123 20:22:18.407895 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-kernel\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.410211 kubelet[2879]: I0123 20:22:18.410176 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-hubble-tls\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.413198 kubelet[2879]: I0123 20:22:18.413068 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c933f622-cb47-4823-a800-3acb8b64ac71-clustermesh-secrets\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.413557 kubelet[2879]: I0123 20:22:18.413486 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cni-path\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.414430 kubelet[2879]: I0123 20:22:18.413748 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-hostproc\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.414430 kubelet[2879]: I0123 20:22:18.413785 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3764a13a-246f-4d12-bf52-63a9b10730a2-cilium-config-path\") pod \"3764a13a-246f-4d12-bf52-63a9b10730a2\" (UID: \"3764a13a-246f-4d12-bf52-63a9b10730a2\") "
Jan 23 20:22:18.414430 kubelet[2879]: I0123 20:22:18.413814 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-xtables-lock\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.414430 kubelet[2879]: I0123 20:22:18.413837 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-run\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.414430 kubelet[2879]: I0123 20:22:18.413863 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-bpf-maps\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.414430 kubelet[2879]: I0123 20:22:18.413891 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-config-path\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.414732 kubelet[2879]: I0123 20:22:18.413921 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgbjx\" (UniqueName: \"kubernetes.io/projected/3764a13a-246f-4d12-bf52-63a9b10730a2-kube-api-access-hgbjx\") pod \"3764a13a-246f-4d12-bf52-63a9b10730a2\" (UID: \"3764a13a-246f-4d12-bf52-63a9b10730a2\") "
Jan 23 20:22:18.414732 kubelet[2879]: I0123 20:22:18.413949 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-cgroup\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.415244 kubelet[2879]: I0123 20:22:18.413992 2879 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-lib-modules\") pod \"c933f622-cb47-4823-a800-3acb8b64ac71\" (UID: \"c933f622-cb47-4823-a800-3acb8b64ac71\") "
Jan 23 20:22:18.419058 kubelet[2879]: I0123 20:22:18.407908 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419058 kubelet[2879]: I0123 20:22:18.407947 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419058 kubelet[2879]: I0123 20:22:18.407969 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419058 kubelet[2879]: I0123 20:22:18.414354 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-kube-api-access-zcbsg" (OuterVolumeSpecName: "kube-api-access-zcbsg") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "kube-api-access-zcbsg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 20:22:18.419305 kubelet[2879]: I0123 20:22:18.414393 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419305 kubelet[2879]: I0123 20:22:18.415332 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419305 kubelet[2879]: I0123 20:22:18.415354 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419305 kubelet[2879]: I0123 20:22:18.415380 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419305 kubelet[2879]: I0123 20:22:18.417915 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cni-path" (OuterVolumeSpecName: "cni-path") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419514 kubelet[2879]: I0123 20:22:18.417940 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-hostproc" (OuterVolumeSpecName: "hostproc") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.419514 kubelet[2879]: I0123 20:22:18.418048 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:22:18.426115 kubelet[2879]: I0123 20:22:18.425522 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c933f622-cb47-4823-a800-3acb8b64ac71-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 20:22:18.426115 kubelet[2879]: I0123 20:22:18.425660 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 20:22:18.428610 kubelet[2879]: I0123 20:22:18.428439 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c933f622-cb47-4823-a800-3acb8b64ac71" (UID: "c933f622-cb47-4823-a800-3acb8b64ac71"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 20:22:18.430486 kubelet[2879]: I0123 20:22:18.430412 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3764a13a-246f-4d12-bf52-63a9b10730a2-kube-api-access-hgbjx" (OuterVolumeSpecName: "kube-api-access-hgbjx") pod "3764a13a-246f-4d12-bf52-63a9b10730a2" (UID: "3764a13a-246f-4d12-bf52-63a9b10730a2"). InnerVolumeSpecName "kube-api-access-hgbjx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 20:22:18.430626 kubelet[2879]: I0123 20:22:18.430582 2879 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3764a13a-246f-4d12-bf52-63a9b10730a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3764a13a-246f-4d12-bf52-63a9b10730a2" (UID: "3764a13a-246f-4d12-bf52-63a9b10730a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518380 2879 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-kernel\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518441 2879 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-hubble-tls\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518461 2879 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c933f622-cb47-4823-a800-3acb8b64ac71-clustermesh-secrets\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518490 2879 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cni-path\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518506 2879 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-hostproc\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518520 2879 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3764a13a-246f-4d12-bf52-63a9b10730a2-cilium-config-path\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518534 2879 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-xtables-lock\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.518567 kubelet[2879]: I0123 20:22:18.518560 2879 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-run\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518594 2879 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-bpf-maps\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518610 2879 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-config-path\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518624 2879 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgbjx\" (UniqueName: \"kubernetes.io/projected/3764a13a-246f-4d12-bf52-63a9b10730a2-kube-api-access-hgbjx\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518638 2879 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-cilium-cgroup\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518652 2879 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-lib-modules\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518675 2879 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zcbsg\" (UniqueName: \"kubernetes.io/projected/c933f622-cb47-4823-a800-3acb8b64ac71-kube-api-access-zcbsg\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518688 2879 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-etc-cni-netd\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.520128 kubelet[2879]: I0123 20:22:18.518703 2879 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c933f622-cb47-4823-a800-3acb8b64ac71-host-proc-sys-net\") on node \"srv-1diuq.gb1.brightbox.com\" DevicePath \"\""
Jan 23 20:22:18.675158 kubelet[2879]: I0123 20:22:18.673629 2879 scope.go:117] "RemoveContainer" containerID="f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1"
Jan 23 20:22:18.680685 containerd[1573]: time="2026-01-23T20:22:18.679899396Z" level=info msg="RemoveContainer for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\""
Jan 23 20:22:18.690927 systemd[1]: Removed slice kubepods-besteffort-pod3764a13a_246f_4d12_bf52_63a9b10730a2.slice - libcontainer container kubepods-besteffort-pod3764a13a_246f_4d12_bf52_63a9b10730a2.slice.
Jan 23 20:22:18.702782 systemd[1]: Removed slice kubepods-burstable-podc933f622_cb47_4823_a800_3acb8b64ac71.slice - libcontainer container kubepods-burstable-podc933f622_cb47_4823_a800_3acb8b64ac71.slice.
Jan 23 20:22:18.702942 systemd[1]: kubepods-burstable-podc933f622_cb47_4823_a800_3acb8b64ac71.slice: Consumed 10.708s CPU time, 226.1M memory peak, 108.1M read from disk, 16.6M written to disk.
Jan 23 20:22:18.705129 kubelet[2879]: I0123 20:22:18.704913 2879 scope.go:117] "RemoveContainer" containerID="f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1"
Jan 23 20:22:18.705215 containerd[1573]: time="2026-01-23T20:22:18.703901003Z" level=info msg="RemoveContainer for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" returns successfully"
Jan 23 20:22:18.705882 containerd[1573]: time="2026-01-23T20:22:18.705462418Z" level=error msg="ContainerStatus for \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\": not found"
Jan 23 20:22:18.708961 kubelet[2879]: E0123 20:22:18.706929 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\": not found" containerID="f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1"
Jan 23 20:22:18.708961 kubelet[2879]: I0123 20:22:18.707012 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1"} err="failed to get container status \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9935495346afc0cc2540453bce29508f794e02b1e6a1d8416f642a3c1a8bbb1\": not found"
Jan 23 20:22:18.708961 kubelet[2879]: I0123 20:22:18.707191 2879 scope.go:117] "RemoveContainer" containerID="dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7"
Jan 23 20:22:18.709389 containerd[1573]: time="2026-01-23T20:22:18.709043284Z" level=info msg="RemoveContainer for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\""
Jan 23 20:22:18.741126 containerd[1573]: time="2026-01-23T20:22:18.741051474Z" level=info msg="RemoveContainer for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" returns successfully"
Jan 23 20:22:18.742218 kubelet[2879]: I0123 20:22:18.742182 2879 scope.go:117] "RemoveContainer" containerID="1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e"
Jan 23 20:22:18.745765 containerd[1573]: time="2026-01-23T20:22:18.745690493Z" level=info msg="RemoveContainer for \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\""
Jan 23 20:22:18.756190 containerd[1573]: time="2026-01-23T20:22:18.756137091Z" level=info msg="RemoveContainer for \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\" returns successfully"
Jan 23 20:22:18.756543 kubelet[2879]: I0123 20:22:18.756487 2879 scope.go:117] "RemoveContainer" containerID="ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761"
Jan 23 20:22:18.759577 containerd[1573]: time="2026-01-23T20:22:18.759466809Z" level=info msg="RemoveContainer for \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\""
Jan 23 20:22:18.765645 containerd[1573]: time="2026-01-23T20:22:18.765601778Z" level=info msg="RemoveContainer for \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\" returns successfully"
Jan 23 20:22:18.766186 kubelet[2879]: I0123 20:22:18.766150 2879 scope.go:117] "RemoveContainer" containerID="59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de"
Jan 23 20:22:18.768358 containerd[1573]: time="2026-01-23T20:22:18.768328926Z" level=info msg="RemoveContainer for \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\""
Jan 23 20:22:18.772994 containerd[1573]: time="2026-01-23T20:22:18.772116681Z" level=info msg="RemoveContainer for \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\" returns successfully"
Jan 23 20:22:18.773382 kubelet[2879]: I0123 20:22:18.773342 2879 scope.go:117] "RemoveContainer" containerID="86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272"
Jan 23 20:22:18.775164 containerd[1573]: time="2026-01-23T20:22:18.775100287Z" level=info msg="RemoveContainer for \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\""
Jan 23 20:22:18.778555 containerd[1573]: time="2026-01-23T20:22:18.778522335Z" level=info msg="RemoveContainer for \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\" returns successfully"
Jan 23 20:22:18.779025 kubelet[2879]: I0123 20:22:18.778913 2879 scope.go:117] "RemoveContainer" containerID="dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7"
Jan 23 20:22:18.779627 containerd[1573]: time="2026-01-23T20:22:18.779264869Z" level=error msg="ContainerStatus for \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\": not found"
Jan 23 20:22:18.779709 kubelet[2879]: E0123 20:22:18.779448 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\": not found" containerID="dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7"
Jan 23 20:22:18.779709 kubelet[2879]: I0123 20:22:18.779484 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7"} err="failed to get container status \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"dba92048beb0b8fbaf4ea1dde08ea18d2eb2a8a98ed6a928478ff3784c6b27c7\": not found"
Jan 23 20:22:18.779709 kubelet[2879]: I0123 20:22:18.779514 2879 scope.go:117] "RemoveContainer" containerID="1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e"
Jan 23 20:22:18.779863 containerd[1573]: time="2026-01-23T20:22:18.779735876Z" level=error msg="ContainerStatus for \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\": not found"
Jan 23 20:22:18.780038 kubelet[2879]: E0123 20:22:18.780007 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\": not found" containerID="1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e"
Jan 23 20:22:18.780281 kubelet[2879]: I0123 20:22:18.780147 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e"} err="failed to get container status \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1222302d1c6fd7c3373e8770b3b039614e455d10be08f9752caeb428d5e3da5e\": not found"
Jan 23 20:22:18.780281 kubelet[2879]: I0123 20:22:18.780179 2879 scope.go:117] "RemoveContainer" containerID="ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761"
Jan 23 20:22:18.780772 containerd[1573]: time="2026-01-23T20:22:18.780705587Z" level=error msg="ContainerStatus for \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\": not found"
Jan 23 20:22:18.780906 kubelet[2879]: E0123 20:22:18.780858 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\": not found" containerID="ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761"
Jan 23 20:22:18.780906 kubelet[2879]: I0123 20:22:18.780888 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761"} err="failed to get container status \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff9244cbe650f2a5843e1d897c4b850b825104c04c3e07ab147d2817bc5a2761\": not found"
Jan 23 20:22:18.781190 kubelet[2879]: I0123 20:22:18.780912 2879 scope.go:117] "RemoveContainer" containerID="59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de"
Jan 23 20:22:18.781246 containerd[1573]: time="2026-01-23T20:22:18.781187978Z" level=error msg="ContainerStatus for \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\": not found"
Jan 23 20:22:18.781535 kubelet[2879]: E0123 20:22:18.781401 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\": not found" containerID="59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de"
Jan 23 20:22:18.781535 kubelet[2879]: I0123 20:22:18.781432 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de"} err="failed to get container status \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\": rpc error: code = NotFound desc = an error occurred
when try to find container \"59c338c8574dd64cbfc5b0dbc1f27cb4e4fa9ab8d2dbd3b0ae9ac879b2e170de\": not found" Jan 23 20:22:18.781535 kubelet[2879]: I0123 20:22:18.781456 2879 scope.go:117] "RemoveContainer" containerID="86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272" Jan 23 20:22:18.781892 containerd[1573]: time="2026-01-23T20:22:18.781801622Z" level=error msg="ContainerStatus for \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\": not found" Jan 23 20:22:18.782052 kubelet[2879]: E0123 20:22:18.781995 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\": not found" containerID="86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272" Jan 23 20:22:18.782151 kubelet[2879]: I0123 20:22:18.782033 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272"} err="failed to get container status \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\": rpc error: code = NotFound desc = an error occurred when try to find container \"86fd45e79308a740b03c5426c9ba8d61b862802e8062a582a3b8e2b4dc7b7272\": not found" Jan 23 20:22:19.077036 kubelet[2879]: I0123 20:22:19.076867 2879 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3764a13a-246f-4d12-bf52-63a9b10730a2" path="/var/lib/kubelet/pods/3764a13a-246f-4d12-bf52-63a9b10730a2/volumes" Jan 23 20:22:19.080743 kubelet[2879]: I0123 20:22:19.078418 2879 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c933f622-cb47-4823-a800-3acb8b64ac71" 
path="/var/lib/kubelet/pods/c933f622-cb47-4823-a800-3acb8b64ac71/volumes" Jan 23 20:22:19.082301 systemd[1]: var-lib-kubelet-pods-3764a13a\x2d246f\x2d4d12\x2dbf52\x2d63a9b10730a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgbjx.mount: Deactivated successfully. Jan 23 20:22:19.082482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-273892e4e49f276201267eb987e0e8af2a3c5a5e8b94604d688147249719eb6b-shm.mount: Deactivated successfully. Jan 23 20:22:19.082620 systemd[1]: var-lib-kubelet-pods-c933f622\x2dcb47\x2d4823\x2da800\x2d3acb8b64ac71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzcbsg.mount: Deactivated successfully. Jan 23 20:22:19.082729 systemd[1]: var-lib-kubelet-pods-c933f622\x2dcb47\x2d4823\x2da800\x2d3acb8b64ac71-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 20:22:19.082838 systemd[1]: var-lib-kubelet-pods-c933f622\x2dcb47\x2d4823\x2da800\x2d3acb8b64ac71-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 20:22:19.979130 sshd[4453]: Connection closed by 68.220.241.50 port 50484 Jan 23 20:22:19.979517 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Jan 23 20:22:19.986212 systemd[1]: sshd@24-10.244.9.250:22-68.220.241.50:50484.service: Deactivated successfully. Jan 23 20:22:19.989751 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 20:22:19.992416 systemd-logind[1552]: Session 27 logged out. Waiting for processes to exit. Jan 23 20:22:19.994506 systemd-logind[1552]: Removed session 27. Jan 23 20:22:20.085282 systemd[1]: Started sshd@25-10.244.9.250:22-68.220.241.50:50500.service - OpenSSH per-connection server daemon (68.220.241.50:50500). 
Jan 23 20:22:20.271459 kubelet[2879]: E0123 20:22:20.270851 2879 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 20:22:20.692012 sshd[4601]: Accepted publickey for core from 68.220.241.50 port 50500 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:20.694262 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:20.704347 systemd-logind[1552]: New session 28 of user core.
Jan 23 20:22:20.713300 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 20:22:21.991623 kubelet[2879]: I0123 20:22:21.991555 2879 memory_manager.go:355] "RemoveStaleState removing state" podUID="c933f622-cb47-4823-a800-3acb8b64ac71" containerName="cilium-agent"
Jan 23 20:22:21.991623 kubelet[2879]: I0123 20:22:21.991600 2879 memory_manager.go:355] "RemoveStaleState removing state" podUID="3764a13a-246f-4d12-bf52-63a9b10730a2" containerName="cilium-operator"
Jan 23 20:22:22.004895 systemd[1]: Created slice kubepods-burstable-podcdf7d87f_f7eb_4d01_9551_e099b2b0b9ce.slice - libcontainer container kubepods-burstable-podcdf7d87f_f7eb_4d01_9551_e099b2b0b9ce.slice.
Jan 23 20:22:22.017512 kubelet[2879]: I0123 20:22:22.017309 2879 status_manager.go:890] "Failed to get status for pod" podUID="cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce" pod="kube-system/cilium-npnp7" err="pods \"cilium-npnp7\" is forbidden: User \"system:node:srv-1diuq.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-1diuq.gb1.brightbox.com' and this object"
Jan 23 20:22:22.019591 kubelet[2879]: W0123 20:22:22.019073 2879 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-1diuq.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-1diuq.gb1.brightbox.com' and this object
Jan 23 20:22:22.019591 kubelet[2879]: E0123 20:22:22.019147 2879 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-1diuq.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-1diuq.gb1.brightbox.com' and this object" logger="UnhandledError"
Jan 23 20:22:22.020005 kubelet[2879]: W0123 20:22:22.019959 2879 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-1diuq.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-1diuq.gb1.brightbox.com' and this object
Jan 23 20:22:22.020179 kubelet[2879]: E0123 20:22:22.020143 2879 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-1diuq.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-1diuq.gb1.brightbox.com' and this object" logger="UnhandledError"
Jan 23 20:22:22.054876 sshd[4604]: Connection closed by 68.220.241.50 port 50500
Jan 23 20:22:22.057378 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Jan 23 20:22:22.064924 systemd[1]: sshd@25-10.244.9.250:22-68.220.241.50:50500.service: Deactivated successfully.
Jan 23 20:22:22.069906 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 20:22:22.071558 systemd-logind[1552]: Session 28 logged out. Waiting for processes to exit.
Jan 23 20:22:22.076428 systemd-logind[1552]: Removed session 28.
Jan 23 20:22:22.145514 kubelet[2879]: I0123 20:22:22.144524 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-xtables-lock\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145514 kubelet[2879]: I0123 20:22:22.144590 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-cni-path\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145514 kubelet[2879]: I0123 20:22:22.144629 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-cilium-config-path\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145514 kubelet[2879]: I0123 20:22:22.144656 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhv6p\" (UniqueName: \"kubernetes.io/projected/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-kube-api-access-rhv6p\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145514 kubelet[2879]: I0123 20:22:22.144686 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-clustermesh-secrets\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145850 kubelet[2879]: I0123 20:22:22.144712 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-host-proc-sys-kernel\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145850 kubelet[2879]: I0123 20:22:22.144737 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-lib-modules\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145850 kubelet[2879]: I0123 20:22:22.144766 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-cilium-run\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145850 kubelet[2879]: I0123 20:22:22.144812 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-hostproc\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145850 kubelet[2879]: I0123 20:22:22.144848 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-hubble-tls\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.145850 kubelet[2879]: I0123 20:22:22.144881 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-cilium-ipsec-secrets\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.146196 kubelet[2879]: I0123 20:22:22.144910 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-host-proc-sys-net\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.146196 kubelet[2879]: I0123 20:22:22.144937 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-etc-cni-netd\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.146196 kubelet[2879]: I0123 20:22:22.144962 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-bpf-maps\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.146196 kubelet[2879]: I0123 20:22:22.145006 2879 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-cilium-cgroup\") pod \"cilium-npnp7\" (UID: \"cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce\") " pod="kube-system/cilium-npnp7"
Jan 23 20:22:22.159925 systemd[1]: Started sshd@26-10.244.9.250:22-68.220.241.50:50506.service - OpenSSH per-connection server daemon (68.220.241.50:50506).
Jan 23 20:22:22.754359 sshd[4614]: Accepted publickey for core from 68.220.241.50 port 50506 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:22.756267 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:22.764366 systemd-logind[1552]: New session 29 of user core.
Jan 23 20:22:22.779524 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 23 20:22:23.153958 sshd[4620]: Connection closed by 68.220.241.50 port 50506
Jan 23 20:22:23.155071 sshd-session[4614]: pam_unix(sshd:session): session closed for user core
Jan 23 20:22:23.161175 systemd[1]: sshd@26-10.244.9.250:22-68.220.241.50:50506.service: Deactivated successfully.
Jan 23 20:22:23.164354 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 20:22:23.166809 systemd-logind[1552]: Session 29 logged out. Waiting for processes to exit.
Jan 23 20:22:23.169279 systemd-logind[1552]: Removed session 29.
Jan 23 20:22:23.252626 kubelet[2879]: E0123 20:22:23.252341 2879 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 23 20:22:23.253243 kubelet[2879]: E0123 20:22:23.253195 2879 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-clustermesh-secrets podName:cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce nodeName:}" failed. No retries permitted until 2026-01-23 20:22:23.752481762 +0000 UTC m=+139.005046400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce-clustermesh-secrets") pod "cilium-npnp7" (UID: "cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce") : failed to sync secret cache: timed out waiting for the condition
Jan 23 20:22:23.259763 systemd[1]: Started sshd@27-10.244.9.250:22-68.220.241.50:41646.service - OpenSSH per-connection server daemon (68.220.241.50:41646).
Jan 23 20:22:23.813454 containerd[1573]: time="2026-01-23T20:22:23.812803951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npnp7,Uid:cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce,Namespace:kube-system,Attempt:0,}"
Jan 23 20:22:23.844357 containerd[1573]: time="2026-01-23T20:22:23.844297485Z" level=info msg="connecting to shim ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7" address="unix:///run/containerd/s/836f4be1a0a3c343076954995a8f76cefb38a443829dceb9ee015d688e0f3d6b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:22:23.848053 sshd[4627]: Accepted publickey for core from 68.220.241.50 port 41646 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:22:23.850998 sshd-session[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:22:23.861153 systemd-logind[1552]: New session 30 of user core.
Jan 23 20:22:23.867285 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 23 20:22:23.902376 systemd[1]: Started cri-containerd-ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7.scope - libcontainer container ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7.
Jan 23 20:22:23.945432 containerd[1573]: time="2026-01-23T20:22:23.945368766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npnp7,Uid:cdf7d87f-f7eb-4d01-9551-e099b2b0b9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\""
Jan 23 20:22:23.951143 containerd[1573]: time="2026-01-23T20:22:23.951076266Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 20:22:23.965585 containerd[1573]: time="2026-01-23T20:22:23.965507364Z" level=info msg="Container 8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:22:23.975460 containerd[1573]: time="2026-01-23T20:22:23.975420651Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65\""
Jan 23 20:22:23.977417 containerd[1573]: time="2026-01-23T20:22:23.977385432Z" level=info msg="StartContainer for \"8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65\""
Jan 23 20:22:23.979281 containerd[1573]: time="2026-01-23T20:22:23.979211178Z" level=info msg="connecting to shim 8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65" address="unix:///run/containerd/s/836f4be1a0a3c343076954995a8f76cefb38a443829dceb9ee015d688e0f3d6b" protocol=ttrpc version=3
Jan 23 20:22:24.005316 systemd[1]: Started cri-containerd-8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65.scope - libcontainer container 8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65.
Jan 23 20:22:24.055588 containerd[1573]: time="2026-01-23T20:22:24.055458103Z" level=info msg="StartContainer for \"8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65\" returns successfully"
Jan 23 20:22:24.073582 systemd[1]: cri-containerd-8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65.scope: Deactivated successfully.
Jan 23 20:22:24.074073 systemd[1]: cri-containerd-8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65.scope: Consumed 35ms CPU time, 9.5M memory peak, 3M read from disk.
Jan 23 20:22:24.079121 containerd[1573]: time="2026-01-23T20:22:24.079049336Z" level=info msg="received container exit event container_id:\"8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65\" id:\"8ec742d6ab6d49dbf6126e0791794b89d7c6af946b37cf99796bed02b80e6c65\" pid:4690 exited_at:{seconds:1769199744 nanos:78529849}"
Jan 23 20:22:24.713248 containerd[1573]: time="2026-01-23T20:22:24.713184454Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 20:22:24.734790 containerd[1573]: time="2026-01-23T20:22:24.734633484Z" level=info msg="Container 0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:22:24.742604 containerd[1573]: time="2026-01-23T20:22:24.742564103Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7\""
Jan 23 20:22:24.743689 containerd[1573]: time="2026-01-23T20:22:24.743648840Z" level=info msg="StartContainer for \"0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7\""
Jan 23 20:22:24.746920 containerd[1573]: time="2026-01-23T20:22:24.746874663Z" level=info msg="connecting to shim 0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7" address="unix:///run/containerd/s/836f4be1a0a3c343076954995a8f76cefb38a443829dceb9ee015d688e0f3d6b" protocol=ttrpc version=3
Jan 23 20:22:24.768467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89762374.mount: Deactivated successfully.
Jan 23 20:22:24.786338 systemd[1]: Started cri-containerd-0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7.scope - libcontainer container 0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7.
Jan 23 20:22:24.842734 containerd[1573]: time="2026-01-23T20:22:24.842640933Z" level=info msg="StartContainer for \"0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7\" returns successfully"
Jan 23 20:22:24.857071 systemd[1]: cri-containerd-0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7.scope: Deactivated successfully.
Jan 23 20:22:24.857542 systemd[1]: cri-containerd-0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7.scope: Consumed 32ms CPU time, 7.4M memory peak, 2M read from disk.
Jan 23 20:22:24.859208 containerd[1573]: time="2026-01-23T20:22:24.859155654Z" level=info msg="received container exit event container_id:\"0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7\" id:\"0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7\" pid:4743 exited_at:{seconds:1769199744 nanos:857807423}"
Jan 23 20:22:24.892566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e62095135301b2f369e82aa367d6fa15e523365b7ed384f8bbb4c53622e1ec7-rootfs.mount: Deactivated successfully.
Jan 23 20:22:25.272410 kubelet[2879]: E0123 20:22:25.272317 2879 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 20:22:25.722305 containerd[1573]: time="2026-01-23T20:22:25.722229842Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 20:22:25.740915 containerd[1573]: time="2026-01-23T20:22:25.740853500Z" level=info msg="Container db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:22:25.763528 containerd[1573]: time="2026-01-23T20:22:25.763324039Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f\""
Jan 23 20:22:25.765289 containerd[1573]: time="2026-01-23T20:22:25.765115935Z" level=info msg="StartContainer for \"db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f\""
Jan 23 20:22:25.768558 containerd[1573]: time="2026-01-23T20:22:25.768525212Z" level=info msg="connecting to shim db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f" address="unix:///run/containerd/s/836f4be1a0a3c343076954995a8f76cefb38a443829dceb9ee015d688e0f3d6b" protocol=ttrpc version=3
Jan 23 20:22:25.805368 systemd[1]: Started cri-containerd-db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f.scope - libcontainer container db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f.
Jan 23 20:22:25.944148 containerd[1573]: time="2026-01-23T20:22:25.943375154Z" level=info msg="StartContainer for \"db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f\" returns successfully"
Jan 23 20:22:25.950980 systemd[1]: cri-containerd-db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f.scope: Deactivated successfully.
Jan 23 20:22:25.959750 containerd[1573]: time="2026-01-23T20:22:25.958753036Z" level=info msg="received container exit event container_id:\"db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f\" id:\"db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f\" pid:4787 exited_at:{seconds:1769199745 nanos:957979641}"
Jan 23 20:22:25.997721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db609d358c1750d3c2ec4d07b4812e494e15422b83be7c1e0203c2607cb5cb6f-rootfs.mount: Deactivated successfully.
Jan 23 20:22:26.730764 containerd[1573]: time="2026-01-23T20:22:26.730528164Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 20:22:26.750038 containerd[1573]: time="2026-01-23T20:22:26.749981660Z" level=info msg="Container 52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:22:26.766174 containerd[1573]: time="2026-01-23T20:22:26.766055317Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1\""
Jan 23 20:22:26.768593 containerd[1573]: time="2026-01-23T20:22:26.768559494Z" level=info msg="StartContainer for \"52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1\""
Jan 23 20:22:26.770346 containerd[1573]: time="2026-01-23T20:22:26.770311529Z" level=info msg="connecting to shim 52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1" address="unix:///run/containerd/s/836f4be1a0a3c343076954995a8f76cefb38a443829dceb9ee015d688e0f3d6b" protocol=ttrpc version=3
Jan 23 20:22:26.807489 systemd[1]: Started cri-containerd-52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1.scope - libcontainer container 52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1.
Jan 23 20:22:26.859112 systemd[1]: cri-containerd-52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1.scope: Deactivated successfully.
Jan 23 20:22:26.864147 containerd[1573]: time="2026-01-23T20:22:26.864034189Z" level=info msg="received container exit event container_id:\"52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1\" id:\"52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1\" pid:4826 exited_at:{seconds:1769199746 nanos:863259709}"
Jan 23 20:22:26.870620 containerd[1573]: time="2026-01-23T20:22:26.870393336Z" level=info msg="StartContainer for \"52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1\" returns successfully"
Jan 23 20:22:26.907486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b696361bbcd9ba036654a8dc769dad857cc28140d4e56b4251e17531615ea1-rootfs.mount: Deactivated successfully.
Jan 23 20:22:27.737198 containerd[1573]: time="2026-01-23T20:22:27.736858913Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 20:22:27.753851 containerd[1573]: time="2026-01-23T20:22:27.753802061Z" level=info msg="Container f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:22:27.761583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645490174.mount: Deactivated successfully.
Jan 23 20:22:27.770579 containerd[1573]: time="2026-01-23T20:22:27.768844646Z" level=info msg="CreateContainer within sandbox \"ddfa1fe826a4981ae26b1d904cbc16b102b4c5d3e313bf7582245b73b0409fb7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba\""
Jan 23 20:22:27.771935 containerd[1573]: time="2026-01-23T20:22:27.771007630Z" level=info msg="StartContainer for \"f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba\""
Jan 23 20:22:27.772806 containerd[1573]: time="2026-01-23T20:22:27.772773653Z" level=info msg="connecting to shim f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba" address="unix:///run/containerd/s/836f4be1a0a3c343076954995a8f76cefb38a443829dceb9ee015d688e0f3d6b" protocol=ttrpc version=3
Jan 23 20:22:27.815381 systemd[1]: Started cri-containerd-f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba.scope - libcontainer container f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba.
Jan 23 20:22:27.884195 containerd[1573]: time="2026-01-23T20:22:27.884071164Z" level=info msg="StartContainer for \"f644854d0e81d9522d2163d38b23ac4fa74a0f94de2e7acd5cc4f474fcf79eba\" returns successfully"
Jan 23 20:22:28.150241 kubelet[2879]: I0123 20:22:28.148791 2879 setters.go:602] "Node became not ready" node="srv-1diuq.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T20:22:28Z","lastTransitionTime":"2026-01-23T20:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 20:22:28.754436 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 23 20:22:28.804516 kubelet[2879]: I0123 20:22:28.803529 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-npnp7" podStartSLOduration=7.803496996 podStartE2EDuration="7.803496996s" podCreationTimestamp="2026-01-23 20:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:22:28.80054814 +0000 UTC m=+144.053112792" watchObservedRunningTime="2026-01-23 20:22:28.803496996 +0000 UTC m=+144.056061653"
Jan 23 20:22:32.798258 systemd-networkd[1488]: lxc_health: Link UP
Jan 23 20:22:32.822161 systemd-networkd[1488]: lxc_health: Gained carrier
Jan 23 20:22:34.717282 systemd-networkd[1488]: lxc_health: Gained IPv6LL
Jan 23 20:22:37.936640 sshd[4657]: Connection closed by 68.220.241.50 port 41646
Jan 23 20:22:37.940418 sshd-session[4627]: pam_unix(sshd:session): session closed for user core
Jan 23 20:22:37.948874 systemd[1]: sshd@27-10.244.9.250:22-68.220.241.50:41646.service: Deactivated successfully.
Jan 23 20:22:37.955266 systemd[1]: session-30.scope: Deactivated successfully.
Jan 23 20:22:37.958481 systemd-logind[1552]: Session 30 logged out. Waiting for processes to exit.
Jan 23 20:22:37.961873 systemd-logind[1552]: Removed session 30.