Jan 28 06:18:47.951728 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:30:15 -00 2026 Jan 28 06:18:47.951770 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262 Jan 28 06:18:47.951791 kernel: BIOS-provided physical RAM map: Jan 28 06:18:47.951801 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 06:18:47.951816 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 06:18:47.951826 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 06:18:47.951837 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 28 06:18:47.951857 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 28 06:18:47.951867 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 06:18:47.951877 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 06:18:47.951888 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 06:18:47.951898 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 06:18:47.951908 kernel: NX (Execute Disable) protection: active Jan 28 06:18:47.951924 kernel: APIC: Static calls initialized Jan 28 06:18:47.951936 kernel: SMBIOS 2.8 present. Jan 28 06:18:47.951947 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 28 06:18:47.951964 kernel: DMI: Memory slots populated: 1/1 Jan 28 06:18:47.951976 kernel: Hypervisor detected: KVM Jan 28 06:18:47.951986 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 06:18:47.952002 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 06:18:47.952013 kernel: kvm-clock: using sched offset of 6746493534 cycles Jan 28 06:18:47.952025 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 06:18:47.952036 kernel: tsc: Detected 2799.998 MHz processor Jan 28 06:18:47.952048 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 06:18:47.952100 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 06:18:47.952118 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 06:18:47.952129 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 06:18:47.952140 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 06:18:47.952158 kernel: Using GB pages for direct mapping Jan 28 06:18:47.952169 kernel: ACPI: Early table checksum verification disabled Jan 28 06:18:47.952180 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 28 06:18:47.952201 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952213 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952224 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952235 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 28 06:18:47.952246 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952257 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952273 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952284 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:18:47.952296 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 28 06:18:47.952313 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 28 06:18:47.952325 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 28 06:18:47.952336 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 28 06:18:47.952352 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 28 06:18:47.952363 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 28 06:18:47.952375 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 28 06:18:47.952386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 28 06:18:47.952398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 28 06:18:47.952409 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 28 06:18:47.952421 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 28 06:18:47.952432 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 28 06:18:47.952448 kernel: Zone ranges: Jan 28 06:18:47.952460 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 06:18:47.952475 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 28 06:18:47.952487 kernel: Normal empty Jan 28 06:18:47.952498 kernel: Device empty Jan 28 06:18:47.952509 kernel: Movable zone start for each node Jan 28 06:18:47.952534 kernel: Early memory node ranges Jan 28 06:18:47.952546 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 28 06:18:47.952558 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 28 06:18:47.952574 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 28 06:18:47.952586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 06:18:47.952604 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 06:18:47.952616 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 28 06:18:47.952627 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 06:18:47.952643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 06:18:47.952664 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 06:18:47.952676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 06:18:47.952687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 06:18:47.952699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 06:18:47.952721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 06:18:47.952733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 06:18:47.952744 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 06:18:47.952756 kernel: TSC deadline timer available Jan 28 06:18:47.952767 kernel: CPU topo: Max. logical packages: 16 Jan 28 06:18:47.952778 kernel: CPU topo: Max. logical dies: 16 Jan 28 06:18:47.952790 kernel: CPU topo: Max. dies per package: 1 Jan 28 06:18:47.952801 kernel: CPU topo: Max. 
threads per core: 1 Jan 28 06:18:47.952812 kernel: CPU topo: Num. cores per package: 1 Jan 28 06:18:47.952828 kernel: CPU topo: Num. threads per package: 1 Jan 28 06:18:47.952840 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 28 06:18:47.954559 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 06:18:47.954591 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 06:18:47.954603 kernel: Booting paravirtualized kernel on KVM Jan 28 06:18:47.954616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 06:18:47.954628 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 28 06:18:47.954640 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 28 06:18:47.954651 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 28 06:18:47.954671 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 28 06:18:47.954682 kernel: kvm-guest: PV spinlocks enabled Jan 28 06:18:47.954694 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 06:18:47.954708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262 Jan 28 06:18:47.954720 kernel: random: crng init done Jan 28 06:18:47.954732 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 06:18:47.954744 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 28 06:18:47.954755 kernel: Fallback order for Node 0: 0 Jan 28 06:18:47.954771 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 28 06:18:47.954783 kernel: Policy zone: DMA32 Jan 28 06:18:47.954795 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 06:18:47.954807 kernel: software IO TLB: area num 16. Jan 28 06:18:47.954819 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 28 06:18:47.954831 kernel: Kernel/User page tables isolation: enabled Jan 28 06:18:47.954842 kernel: ftrace: allocating 40097 entries in 157 pages Jan 28 06:18:47.954854 kernel: ftrace: allocated 157 pages with 5 groups Jan 28 06:18:47.954865 kernel: Dynamic Preempt: voluntary Jan 28 06:18:47.954882 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 06:18:47.954904 kernel: rcu: RCU event tracing is enabled. Jan 28 06:18:47.954916 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 28 06:18:47.954928 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 06:18:47.954947 kernel: Rude variant of Tasks RCU enabled. Jan 28 06:18:47.954967 kernel: Tracing variant of Tasks RCU enabled. Jan 28 06:18:47.954979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 06:18:47.954990 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 28 06:18:47.955002 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 06:18:47.955020 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 28 06:18:47.955032 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 06:18:47.955044 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 28 06:18:47.955055 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 06:18:47.955079 kernel: Console: colour VGA+ 80x25 Jan 28 06:18:47.955095 kernel: printk: legacy console [tty0] enabled Jan 28 06:18:47.955107 kernel: printk: legacy console [ttyS0] enabled Jan 28 06:18:47.955119 kernel: ACPI: Core revision 20240827 Jan 28 06:18:47.955137 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 06:18:47.955150 kernel: x2apic enabled Jan 28 06:18:47.955162 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 06:18:47.955174 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 28 06:18:47.955203 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 28 06:18:47.955216 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 06:18:47.955228 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 28 06:18:47.955240 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 28 06:18:47.955252 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 06:18:47.955270 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 06:18:47.955282 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 06:18:47.955294 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 28 06:18:47.955306 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 28 06:18:47.955318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 28 06:18:47.955329 kernel: MDS: Mitigation: Clear CPU buffers Jan 28 06:18:47.955341 kernel: MMIO Stale Data: Unknown: No mitigations Jan 28 06:18:47.955353 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 28 06:18:47.955365 kernel: active return thunk: its_return_thunk Jan 28 06:18:47.955376 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 28 06:18:47.955389 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 06:18:47.955406 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 06:18:47.955418 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 06:18:47.955429 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 06:18:47.955441 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 28 06:18:47.955453 kernel: Freeing SMP alternatives memory: 32K Jan 28 06:18:47.955473 kernel: pid_max: default: 32768 minimum: 301 Jan 28 06:18:47.955485 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 06:18:47.955496 kernel: landlock: Up and running. Jan 28 06:18:47.955508 kernel: SELinux: Initializing. Jan 28 06:18:47.959635 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 06:18:47.959660 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 06:18:47.959701 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 28 06:18:47.959714 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 28 06:18:47.959727 kernel: signal: max sigframe size: 1776 Jan 28 06:18:47.959747 kernel: rcu: Hierarchical SRCU implementation. Jan 28 06:18:47.959761 kernel: rcu: Max phase no-delay instances is 400. Jan 28 06:18:47.959774 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 28 06:18:47.959786 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 06:18:47.959798 kernel: smp: Bringing up secondary CPUs ... Jan 28 06:18:47.959811 kernel: smpboot: x86: Booting SMP configuration: Jan 28 06:18:47.959828 kernel: .... node #0, CPUs: #1 Jan 28 06:18:47.959841 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 06:18:47.959853 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 28 06:18:47.959866 kernel: Memory: 1887484K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 203116K reserved, 0K cma-reserved) Jan 28 06:18:47.959879 kernel: devtmpfs: initialized Jan 28 06:18:47.959891 kernel: x86/mm: Memory block size: 128MB Jan 28 06:18:47.959904 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 06:18:47.959916 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 28 06:18:47.959928 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 06:18:47.959945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 06:18:47.959958 kernel: audit: initializing netlink subsys (disabled) Jan 28 06:18:47.960007 kernel: audit: type=2000 audit(1769581124.248:1): state=initialized audit_enabled=0 res=1 Jan 28 06:18:47.960020 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 06:18:47.960032 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 06:18:47.960044 kernel: cpuidle: using governor menu Jan 28 06:18:47.960056 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 06:18:47.960068 kernel: dca service started, version 1.12.1 Jan 28 06:18:47.960080 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 28 06:18:47.960108 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 06:18:47.960120 kernel: PCI: Using configuration type 1 for base access Jan 28 06:18:47.960133 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 28 06:18:47.960145 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 06:18:47.960157 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 06:18:47.960176 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 06:18:47.960198 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 06:18:47.960211 kernel: ACPI: Added _OSI(Module Device) Jan 28 06:18:47.960223 kernel: ACPI: Added _OSI(Processor Device) Jan 28 06:18:47.960242 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 06:18:47.960254 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 06:18:47.960266 kernel: ACPI: Interpreter enabled Jan 28 06:18:47.960278 kernel: ACPI: PM: (supports S0 S5) Jan 28 06:18:47.960290 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 06:18:47.960303 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 06:18:47.960315 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 06:18:47.960327 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 06:18:47.960339 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 06:18:47.962713 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 06:18:47.962927 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 28 06:18:47.963095 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 28 06:18:47.963115 kernel: PCI host bridge to bus 0000:00 Jan 28 06:18:47.963311 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 06:18:47.963464 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 06:18:47.965681 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 06:18:47.965843 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 28 06:18:47.966002 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 06:18:47.966155 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 28 06:18:47.966323 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 06:18:47.966618 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 28 06:18:47.966828 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 28 06:18:47.967006 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 28 06:18:47.967168 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 28 06:18:47.967352 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 28 06:18:47.967513 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 06:18:47.971584 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.971772 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 28 06:18:47.971956 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 06:18:47.972140 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 06:18:47.972319 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 06:18:47.972505 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.972757 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 28 06:18:47.974612 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 
06:18:47.974818 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 06:18:47.975005 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 06:18:47.975268 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.975451 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 28 06:18:47.975655 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 06:18:47.975827 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 06:18:47.975998 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 06:18:47.976202 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.976370 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 28 06:18:47.977987 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 06:18:47.978172 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 06:18:47.978360 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 06:18:47.979571 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.979766 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 28 06:18:47.979943 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 06:18:47.980119 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 06:18:47.980317 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 06:18:47.982386 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.982611 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 28 06:18:47.982800 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 06:18:47.982967 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 06:18:47.983131 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 06:18:47.983338 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.983513 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 28 06:18:47.983715 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 06:18:47.983934 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 06:18:47.984108 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 06:18:47.984319 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:18:47.984484 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 28 06:18:47.985388 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 06:18:47.986605 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 06:18:47.986795 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 06:18:47.986997 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 28 06:18:47.987219 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 28 06:18:47.987388 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 28 06:18:47.987574 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 28 06:18:47.987748 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 28 06:18:47.987950 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 28 06:18:47.988128 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 28 06:18:47.988314 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 28 06:18:47.988475 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 28 06:18:47.991740 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 28 06:18:47.991921 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 06:18:47.992131 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 28 06:18:47.992314 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 28 06:18:47.992478 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 28 06:18:47.992741 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 28 06:18:47.992909 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 28 06:18:47.993092 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 28 06:18:47.993284 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 28 06:18:47.993451 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 06:18:47.997784 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 06:18:47.997985 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 06:18:47.998210 kernel: pci_bus 0000:02: extended config space not accessible Jan 28 06:18:47.998440 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 28 06:18:47.998648 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 28 06:18:47.998830 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 06:18:47.999040 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 28 06:18:47.999234 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 28 06:18:47.999402 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 06:18:48.002639 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 28 06:18:48.002826 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 28 06:18:48.002997 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 06:18:48.003178 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 06:18:48.003367 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 06:18:48.003554 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 06:18:48.003723 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 06:18:48.003890 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 06:18:48.003910 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 06:18:48.003923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 06:18:48.003943 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 06:18:48.003956 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 06:18:48.003968 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 06:18:48.003981 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 06:18:48.003993 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 06:18:48.004005 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 06:18:48.004017 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 06:18:48.004030 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 06:18:48.004042 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 06:18:48.004059 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 06:18:48.004071 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
28 06:18:48.004092 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 06:18:48.004104 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 06:18:48.004116 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 06:18:48.004128 kernel: iommu: Default domain type: Translated Jan 28 06:18:48.004141 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 06:18:48.004153 kernel: PCI: Using ACPI for IRQ routing Jan 28 06:18:48.004165 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 06:18:48.004203 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 06:18:48.004217 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 28 06:18:48.004381 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 06:18:48.007590 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 06:18:48.007778 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 06:18:48.007799 kernel: vgaarb: loaded Jan 28 06:18:48.007813 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 06:18:48.007832 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 06:18:48.007853 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 06:18:48.007866 kernel: pnp: PnP ACPI init Jan 28 06:18:48.008082 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 06:18:48.008104 kernel: pnp: PnP ACPI: found 5 devices Jan 28 06:18:48.008117 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 06:18:48.008141 kernel: NET: Registered PF_INET protocol family Jan 28 06:18:48.008153 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 06:18:48.008165 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 28 06:18:48.008177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 06:18:48.008221 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 06:18:48.008234 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 28 06:18:48.008246 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 28 06:18:48.008259 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 06:18:48.008271 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 06:18:48.008283 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 06:18:48.008296 kernel: NET: Registered PF_XDP protocol family Jan 28 06:18:48.008463 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 28 06:18:48.008657 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 28 06:18:48.008863 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 28 06:18:48.009030 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 28 06:18:48.009209 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 28 06:18:48.009377 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 28 06:18:48.013389 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 28 06:18:48.013624 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 28 06:18:48.013858 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 28 06:18:48.014050 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 28 06:18:48.014261 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 28 06:18:48.014428 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 28 06:18:48.015702 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 28 06:18:48.015901 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 28 06:18:48.016069 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 28 06:18:48.016253 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 28 06:18:48.016427 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 06:18:48.017709 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 06:18:48.017890 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 06:18:48.018068 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 28 06:18:48.018247 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 06:18:48.018410 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 06:18:48.018634 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 06:18:48.018818 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 28 06:18:48.019002 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 06:18:48.019163 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 06:18:48.019357 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 06:18:48.019546 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 28 06:18:48.019722 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 06:18:48.019892 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 06:18:48.020066 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 06:18:48.020241 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 28 06:18:48.020411 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 06:18:48.020621 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 06:18:48.020839 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 06:18:48.021016 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 28 06:18:48.021178 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 06:18:48.021361 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 06:18:48.021568 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 06:18:48.021733 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 28 06:18:48.021895 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 06:18:48.022056 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 06:18:48.022231 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 06:18:48.022432 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 28 06:18:48.022617 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 06:18:48.022805 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 06:18:48.022979 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 06:18:48.023147 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 28 06:18:48.023325 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 06:18:48.023488 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 06:18:48.023682 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 06:18:48.023835 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 06:18:48.023996 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 06:18:48.024155 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 28 06:18:48.024319 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 06:18:48.024487 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 28 06:18:48.024669 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 28 06:18:48.024816 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 28 06:18:48.024961 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 06:18:48.025161 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 28 06:18:48.025372 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 28 06:18:48.025558 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 28 06:18:48.025736 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 06:18:48.025904 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 28 06:18:48.026133 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 28 06:18:48.026318 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 06:18:48.026498 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 28 06:18:48.026726 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 28 06:18:48.026926 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 06:18:48.027120 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 28 06:18:48.027300 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 28 06:18:48.027460 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 06:18:48.027687 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 28 06:18:48.027862 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 28 06:18:48.028016 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 06:18:48.028210 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 28 06:18:48.028377 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 28 06:18:48.028576 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 06:18:48.028804 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 28 06:18:48.028963 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 28 06:18:48.029115 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 06:18:48.029136 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 06:18:48.029157 kernel: PCI: CLS 0 bytes, default 64 Jan 28 06:18:48.029170 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 28 06:18:48.029199 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 28 06:18:48.029215 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 28 06:18:48.029229 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 28 06:18:48.029242 kernel: Initialise system trusted keyrings Jan 28 06:18:48.029255 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 28 06:18:48.029267 
kernel: Key type asymmetric registered Jan 28 06:18:48.029280 kernel: Asymmetric key parser 'x509' registered Jan 28 06:18:48.029298 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 06:18:48.029311 kernel: io scheduler mq-deadline registered Jan 28 06:18:48.029324 kernel: io scheduler kyber registered Jan 28 06:18:48.029337 kernel: io scheduler bfq registered Jan 28 06:18:48.029502 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 28 06:18:48.029718 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 28 06:18:48.029896 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.030079 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 28 06:18:48.030256 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 28 06:18:48.030455 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.030641 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 28 06:18:48.030834 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 28 06:18:48.031003 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.031203 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 28 06:18:48.031370 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 28 06:18:48.031562 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.031732 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 28 06:18:48.031914 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 28 06:18:48.032113 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.032299 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 28 06:18:48.032473 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 28 06:18:48.032657 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.032821 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 28 06:18:48.032983 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 28 06:18:48.033152 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.033339 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 28 06:18:48.033575 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 28 06:18:48.033753 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:18:48.033775 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 06:18:48.033789 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 06:18:48.033802 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 06:18:48.033815 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 06:18:48.033828 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 06:18:48.033848 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 
06:18:48.033861 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 06:18:48.033874 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 06:18:48.033892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 06:18:48.034119 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 28 06:18:48.034300 kernel: rtc_cmos 00:03: registered as rtc0 Jan 28 06:18:48.034456 kernel: rtc_cmos 00:03: setting system clock to 2026-01-28T06:18:47 UTC (1769581127) Jan 28 06:18:48.034645 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 28 06:18:48.034679 kernel: intel_pstate: CPU model not supported Jan 28 06:18:48.034705 kernel: NET: Registered PF_INET6 protocol family Jan 28 06:18:48.034718 kernel: Segment Routing with IPv6 Jan 28 06:18:48.034730 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 06:18:48.034752 kernel: NET: Registered PF_PACKET protocol family Jan 28 06:18:48.034765 kernel: Key type dns_resolver registered Jan 28 06:18:48.034778 kernel: IPI shorthand broadcast: enabled Jan 28 06:18:48.034791 kernel: sched_clock: Marking stable (3472004208, 218198181)->(3949402816, -259200427) Jan 28 06:18:48.034803 kernel: registered taskstats version 1 Jan 28 06:18:48.034822 kernel: Loading compiled-in X.509 certificates Jan 28 06:18:48.034835 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 31c1e06975b690596c927b070a4cb9e218a3417b' Jan 28 06:18:48.034847 kernel: Demotion targets for Node 0: null Jan 28 06:18:48.034860 kernel: Key type .fscrypt registered Jan 28 06:18:48.034873 kernel: Key type fscrypt-provisioning registered Jan 28 06:18:48.034885 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 06:18:48.034898 kernel: ima: Allocated hash algorithm: sha1 Jan 28 06:18:48.034911 kernel: ima: No architecture policies found Jan 28 06:18:48.034924 kernel: clk: Disabling unused clocks Jan 28 06:18:48.034953 kernel: Warning: unable to open an initial console. Jan 28 06:18:48.034966 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 28 06:18:48.034978 kernel: Write protecting the kernel read-only data: 40960k Jan 28 06:18:48.034991 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 28 06:18:48.035016 kernel: Run /init as init process Jan 28 06:18:48.035028 kernel: with arguments: Jan 28 06:18:48.035041 kernel: /init Jan 28 06:18:48.035053 kernel: with environment: Jan 28 06:18:48.035072 kernel: HOME=/ Jan 28 06:18:48.035089 kernel: TERM=linux Jan 28 06:18:48.035104 systemd[1]: Successfully made /usr/ read-only. Jan 28 06:18:48.035152 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 06:18:48.035167 systemd[1]: Detected virtualization kvm. Jan 28 06:18:48.035180 systemd[1]: Detected architecture x86-64. Jan 28 06:18:48.035203 systemd[1]: Running in initrd. Jan 28 06:18:48.035217 systemd[1]: No hostname configured, using default hostname. Jan 28 06:18:48.035237 systemd[1]: Hostname set to . Jan 28 06:18:48.035251 systemd[1]: Initializing machine ID from VM UUID. Jan 28 06:18:48.035264 systemd[1]: Queued start job for default target initrd.target. Jan 28 06:18:48.035277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 28 06:18:48.035292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:18:48.035306 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 06:18:48.035320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 06:18:48.035333 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 06:18:48.035353 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 06:18:48.035368 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 06:18:48.035382 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 06:18:48.035395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:18:48.035409 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:18:48.035422 systemd[1]: Reached target paths.target - Path Units. Jan 28 06:18:48.035436 systemd[1]: Reached target slices.target - Slice Units. Jan 28 06:18:48.035454 systemd[1]: Reached target swap.target - Swaps. Jan 28 06:18:48.035468 systemd[1]: Reached target timers.target - Timer Units. Jan 28 06:18:48.035481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 06:18:48.035495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 06:18:48.035509 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 06:18:48.035548 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 06:18:48.035563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:18:48.035577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 06:18:48.035606 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:18:48.035628 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 06:18:48.035641 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 06:18:48.035655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 06:18:48.035675 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 06:18:48.035689 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 06:18:48.035703 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 06:18:48.035716 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 06:18:48.035730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 06:18:48.035753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:18:48.035767 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 06:18:48.035781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:18:48.035843 systemd-journald[210]: Collecting audit messages is disabled. Jan 28 06:18:48.035879 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 06:18:48.035894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 28 06:18:48.035920 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 06:18:48.035933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 06:18:48.035950 kernel: Bridge firewalling registered Jan 28 06:18:48.035986 systemd-journald[210]: Journal started Jan 28 06:18:48.036009 systemd-journald[210]: Runtime Journal (/run/log/journal/f6b4a1dee90e422cb028046dcf68a91a) is 4.7M, max 37.8M, 33.1M free. Jan 28 06:18:47.982484 systemd-modules-load[211]: Inserted module 'overlay' Jan 28 06:18:48.081470 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 06:18:48.015004 systemd-modules-load[211]: Inserted module 'br_netfilter' Jan 28 06:18:48.083795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 06:18:48.084901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:18:48.090271 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 06:18:48.093717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:18:48.097489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 06:18:48.101702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 06:18:48.122500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:18:48.123817 systemd-tmpfiles[230]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 06:18:48.129951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:18:48.134622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 06:18:48.139050 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 06:18:48.140026 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:18:48.143757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 06:18:48.173961 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262 Jan 28 06:18:48.203732 systemd-resolved[249]: Positive Trust Anchors: Jan 28 06:18:48.204823 systemd-resolved[249]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 06:18:48.204867 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 06:18:48.209502 systemd-resolved[249]: Defaulting to hostname 'linux'. Jan 28 06:18:48.214482 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 06:18:48.216335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:18:48.293552 kernel: SCSI subsystem initialized Jan 28 06:18:48.304563 kernel: Loading iSCSI transport class v2.0-870. Jan 28 06:18:48.318572 kernel: iscsi: registered transport (tcp) Jan 28 06:18:48.344011 kernel: iscsi: registered transport (qla4xxx) Jan 28 06:18:48.344114 kernel: QLogic iSCSI HBA Driver Jan 28 06:18:48.370079 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 06:18:48.389609 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 06:18:48.391292 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 06:18:48.456378 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 06:18:48.459144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 06:18:48.520573 kernel: raid6: sse2x4 gen() 13708 MB/s Jan 28 06:18:48.538604 kernel: raid6: sse2x2 gen() 9499 MB/s Jan 28 06:18:48.557026 kernel: raid6: sse2x1 gen() 9316 MB/s Jan 28 06:18:48.557143 kernel: raid6: using algorithm sse2x4 gen() 13708 MB/s Jan 28 06:18:48.576211 kernel: raid6: .... xor() 7734 MB/s, rmw enabled Jan 28 06:18:48.576310 kernel: raid6: using ssse3x2 recovery algorithm Jan 28 06:18:48.601583 kernel: xor: automatically using best checksumming function avx Jan 28 06:18:48.794590 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 06:18:48.804690 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 06:18:48.808856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:18:48.844883 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jan 28 06:18:48.854078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:18:48.858731 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 06:18:48.892154 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jan 28 06:18:48.926934 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 06:18:48.930230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 06:18:49.061209 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:18:49.065152 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 28 06:18:49.205546 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 28 06:18:49.213560 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 06:18:49.220548 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 28 06:18:49.224572 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 28 06:18:49.250935 kernel: libata version 3.00 loaded. Jan 28 06:18:49.255431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 06:18:49.257182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:18:49.258370 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:18:49.263169 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 06:18:49.264701 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 06:18:49.266662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:18:49.270790 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 06:18:49.282032 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 06:18:49.282337 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 06:18:49.282577 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 06:18:49.295031 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 06:18:49.295140 kernel: GPT:17805311 != 125829119 Jan 28 06:18:49.295171 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 06:18:49.295191 kernel: GPT:17805311 != 125829119 Jan 28 06:18:49.295218 kernel: AES CTR mode by8 optimization enabled Jan 28 06:18:49.295237 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 06:18:49.295253 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 06:18:49.303579 kernel: scsi host0: ahci Jan 28 06:18:49.306573 kernel: scsi host1: ahci Jan 28 06:18:49.310582 kernel: scsi host2: ahci Jan 28 06:18:49.317789 kernel: scsi host3: ahci Jan 28 06:18:49.327563 kernel: scsi host4: ahci Jan 28 06:18:49.330546 kernel: ACPI: bus type USB registered Jan 28 06:18:49.360570 kernel: scsi host5: ahci Jan 28 06:18:49.361013 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Jan 28 06:18:49.361036 kernel: usbcore: registered new interface driver usbfs Jan 28 06:18:49.361052 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Jan 28 06:18:49.361082 kernel: usbcore: registered new interface driver hub Jan 28 06:18:49.361098 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Jan 28 06:18:49.361165 kernel: usbcore: registered new device driver usb Jan 28 06:18:49.361205 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Jan 28 06:18:49.361224 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Jan 28 06:18:49.361240 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Jan 28 06:18:49.431171 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 06:18:49.474728 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 28 06:18:49.477077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 06:18:49.490909 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 06:18:49.503573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 06:18:49.523703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 06:18:49.525925 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 06:18:49.549571 disk-uuid[608]: Primary Header is updated. Jan 28 06:18:49.549571 disk-uuid[608]: Secondary Entries is updated. Jan 28 06:18:49.549571 disk-uuid[608]: Secondary Header is updated. Jan 28 06:18:49.555556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 06:18:49.565606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 06:18:49.675592 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 06:18:49.675732 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 28 06:18:49.678343 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 06:18:49.681553 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 06:18:49.687884 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 06:18:49.687947 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 06:18:49.719575 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 06:18:49.723556 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 28 06:18:49.729584 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 28 06:18:49.736562 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 06:18:49.739541 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 28 06:18:49.742566 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 28 06:18:49.747108 kernel: hub 1-0:1.0: USB hub found Jan 28 06:18:49.747385 kernel: hub 1-0:1.0: 4 ports detected Jan 28 06:18:49.750545 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 28 06:18:49.753143 kernel: hub 2-0:1.0: USB hub found Jan 28 06:18:49.753466 kernel: hub 2-0:1.0: 4 ports detected Jan 28 06:18:49.788449 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 06:18:49.790778 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 06:18:49.791586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:18:49.793282 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 06:18:49.796304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 06:18:49.825368 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 06:18:49.988682 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 28 06:18:50.129593 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 06:18:50.136728 kernel: usbcore: registered new interface driver usbhid Jan 28 06:18:50.136820 kernel: usbhid: USB HID core driver Jan 28 06:18:50.145025 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 28 06:18:50.145064 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 28 06:18:50.568021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 06:18:50.568979 disk-uuid[609]: The operation has completed successfully. 
Jan 28 06:18:50.625711 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 06:18:50.626757 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 06:18:50.682118 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 06:18:50.735743 sh[638]: Success Jan 28 06:18:50.762845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 06:18:50.762957 kernel: device-mapper: uevent: version 1.0.3 Jan 28 06:18:50.768540 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 06:18:50.778553 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Jan 28 06:18:50.830962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 06:18:50.837713 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 06:18:50.855686 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 28 06:18:50.869543 kernel: BTRFS: device fsid 4389fb68-1fd1-4240-9a3a-21ed56363b72 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (650) Jan 28 06:18:50.870591 kernel: BTRFS info (device dm-0): first mount of filesystem 4389fb68-1fd1-4240-9a3a-21ed56363b72 Jan 28 06:18:50.872653 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:18:50.882921 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 06:18:50.882959 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 06:18:50.886938 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 06:18:50.889257 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 06:18:50.890948 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 06:18:50.893743 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 06:18:50.896690 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 06:18:50.935987 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (683) Jan 28 06:18:50.936055 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 06:18:50.938108 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:18:50.944292 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:18:50.944359 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:18:50.951617 kernel: BTRFS info (device vda6): last unmount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 06:18:50.953505 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 06:18:50.956772 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 06:18:51.085954 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 06:18:51.089757 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 06:18:51.160043 systemd-networkd[822]: lo: Link UP Jan 28 06:18:51.161023 systemd-networkd[822]: lo: Gained carrier Jan 28 06:18:51.165800 systemd-networkd[822]: Enumeration completed Jan 28 06:18:51.167563 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 28 06:18:51.168350 systemd-networkd[822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 06:18:51.168356 systemd-networkd[822]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 06:18:51.170186 systemd[1]: Reached target network.target - Network. Jan 28 06:18:51.171755 systemd-networkd[822]: eth0: Link UP Jan 28 06:18:51.172073 systemd-networkd[822]: eth0: Gained carrier Jan 28 06:18:51.172088 systemd-networkd[822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 06:18:51.217686 systemd-networkd[822]: eth0: DHCPv4 address 10.230.78.222/30, gateway 10.230.78.221 acquired from 10.230.78.221 Jan 28 06:18:51.264410 ignition[736]: Ignition 2.22.0 Jan 28 06:18:51.264435 ignition[736]: Stage: fetch-offline Jan 28 06:18:51.264511 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:18:51.267969 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 06:18:51.264551 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:18:51.264717 ignition[736]: parsed url from cmdline: "" Jan 28 06:18:51.264724 ignition[736]: no config URL provided Jan 28 06:18:51.264733 ignition[736]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 06:18:51.264749 ignition[736]: no config at "/usr/lib/ignition/user.ign" Jan 28 06:18:51.273692 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 06:18:51.264758 ignition[736]: failed to fetch config: resource requires networking Jan 28 06:18:51.265043 ignition[736]: Ignition finished successfully Jan 28 06:18:51.335671 ignition[831]: Ignition 2.22.0 Jan 28 06:18:51.335708 ignition[831]: Stage: fetch Jan 28 06:18:51.335939 ignition[831]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:18:51.335959 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:18:51.336094 ignition[831]: parsed url from cmdline: "" Jan 28 06:18:51.336101 ignition[831]: no config URL provided Jan 28 06:18:51.336123 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 06:18:51.336141 ignition[831]: no config at "/usr/lib/ignition/user.ign" Jan 28 06:18:51.336266 ignition[831]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 28 06:18:51.336762 ignition[831]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 28 06:18:51.336822 ignition[831]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 28 06:18:51.354419 ignition[831]: GET result: OK Jan 28 06:18:51.355302 ignition[831]: parsing config with SHA512: bb483270c156186de495d09f8756b208fb61cdf4a8a7d70df472f9434478143f280ae807452021cbbdc42cb1ca8b80b0dc7b59e6b5a8c955af3856327b1cdd63 Jan 28 06:18:51.361782 unknown[831]: fetched base config from "system" Jan 28 06:18:51.362197 ignition[831]: fetch: fetch complete Jan 28 06:18:51.361797 unknown[831]: fetched base config from "system" Jan 28 06:18:51.362205 ignition[831]: fetch: fetch passed Jan 28 06:18:51.361806 unknown[831]: fetched user config from "openstack" Jan 28 06:18:51.363056 ignition[831]: Ignition finished successfully Jan 28 06:18:51.367706 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 06:18:51.369756 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 28 06:18:51.450504 ignition[837]: Ignition 2.22.0 Jan 28 06:18:51.450545 ignition[837]: Stage: kargs Jan 28 06:18:51.450751 ignition[837]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:18:51.450769 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:18:51.451717 ignition[837]: kargs: kargs passed Jan 28 06:18:51.455722 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 06:18:51.451815 ignition[837]: Ignition finished successfully Jan 28 06:18:51.458632 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 06:18:51.537746 ignition[843]: Ignition 2.22.0 Jan 28 06:18:51.537772 ignition[843]: Stage: disks Jan 28 06:18:51.537971 ignition[843]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:18:51.537998 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:18:51.542826 ignition[843]: disks: disks passed Jan 28 06:18:51.542925 ignition[843]: Ignition finished successfully Jan 28 06:18:51.545622 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 06:18:51.547412 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 06:18:51.549150 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 06:18:51.550792 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 06:18:51.551591 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 06:18:51.553181 systemd[1]: Reached target basic.target - Basic System. Jan 28 06:18:51.556220 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 06:18:51.589716 systemd-fsck[851]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 28 06:18:51.592817 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 06:18:51.596285 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 06:18:51.728577 kernel: EXT4-fs (vda9): mounted filesystem 0c920277-6cf2-4276-8e4c-1a9645be49e7 r/w with ordered data mode. Quota mode: none. Jan 28 06:18:51.727971 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 06:18:51.729546 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 06:18:51.732076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 06:18:51.734008 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 06:18:51.736323 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 06:18:51.739262 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 28 06:18:51.740949 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 06:18:51.741001 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 06:18:51.755166 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 06:18:51.759926 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 28 06:18:51.780340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Jan 28 06:18:51.780377 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 06:18:51.780396 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:18:51.780418 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:18:51.780435 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:18:51.783801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 06:18:51.853561 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:18:51.878554 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 06:18:51.885373 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Jan 28 06:18:51.891433 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 06:18:51.897412 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 06:18:52.011592 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 06:18:52.014323 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 06:18:52.016010 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 06:18:52.036878 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 06:18:52.040891 kernel: BTRFS info (device vda6): last unmount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 06:18:52.063983 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 06:18:52.086213 ignition[977]: INFO : Ignition 2.22.0 Jan 28 06:18:52.088614 ignition[977]: INFO : Stage: mount Jan 28 06:18:52.088614 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:18:52.088614 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:18:52.091331 ignition[977]: INFO : mount: mount passed Jan 28 06:18:52.091331 ignition[977]: INFO : Ignition finished successfully Jan 28 06:18:52.093005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 06:18:52.578734 systemd-networkd[822]: eth0: Gained IPv6LL Jan 28 06:18:52.884557 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:18:54.086571 systemd-networkd[822]: eth0: Ignoring DHCPv6 address 2a02:1348:179:93b7:24:19ff:fee6:4ede/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:93b7:24:19ff:fee6:4ede/64 assigned by NDisc. Jan 28 06:18:54.086587 systemd-networkd[822]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 06:18:54.893546 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:18:58.902569 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:18:58.911935 coreos-metadata[861]: Jan 28 06:18:58.911 WARN failed to locate config-drive, using the metadata service API instead Jan 28 06:18:58.932642 coreos-metadata[861]: Jan 28 06:18:58.932 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 06:18:58.948924 coreos-metadata[861]: Jan 28 06:18:58.948 INFO Fetch successful Jan 28 06:18:58.949768 coreos-metadata[861]: Jan 28 06:18:58.949 INFO wrote hostname srv-4e3e3.gb1.brightbox.com to /sysroot/etc/hostname Jan 28 06:18:58.951673 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
Jan 28 06:18:58.951900 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 28 06:18:58.957122 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 06:18:58.980805 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 06:18:59.003553 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994) Jan 28 06:18:59.008783 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 06:18:59.008822 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:18:59.016176 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:18:59.016222 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:18:59.019046 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 06:18:59.058552 ignition[1012]: INFO : Ignition 2.22.0 Jan 28 06:18:59.058552 ignition[1012]: INFO : Stage: files Jan 28 06:18:59.061545 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:18:59.061545 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:18:59.063590 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Jan 28 06:18:59.064747 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 06:18:59.064747 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 06:18:59.072575 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 06:18:59.072575 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 06:18:59.074686 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 06:18:59.073639 unknown[1012]: wrote ssh authorized keys file for user: core Jan 28 06:18:59.076690 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 06:18:59.076690 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 06:18:59.281241 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 06:18:59.643915 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 06:18:59.643915 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 06:18:59.643915 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 28 06:18:59.939262 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 06:19:00.200250 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 06:19:00.202112 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 06:19:00.210891 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 06:19:00.210891 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 06:19:00.210891 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:19:00.210891 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:19:00.210891 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:19:00.210891 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 28 06:19:00.457642 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 06:19:02.023636 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:19:02.023636 ignition[1012]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 28 06:19:02.027129 ignition[1012]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 06:19:02.029361 ignition[1012]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 06:19:02.029361 ignition[1012]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 28 06:19:02.031582 ignition[1012]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 28 06:19:02.031582 ignition[1012]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 06:19:02.031582 ignition[1012]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 06:19:02.031582 ignition[1012]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 06:19:02.031582 ignition[1012]: INFO : files: files passed Jan 28 06:19:02.031582 ignition[1012]: INFO : Ignition finished successfully Jan 28 
06:19:02.033391 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 06:19:02.037709 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 06:19:02.040956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 06:19:02.070268 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 06:19:02.070481 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 06:19:02.074300 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:19:02.074300 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:19:02.077999 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:19:02.080856 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 06:19:02.082195 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 06:19:02.084735 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 06:19:02.138282 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 06:19:02.138471 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 06:19:02.140195 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 06:19:02.141477 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 06:19:02.143236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 06:19:02.144698 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 06:19:02.177834 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 06:19:02.180636 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 06:19:02.208061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:19:02.209076 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:19:02.210652 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 06:19:02.212158 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 06:19:02.212424 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 06:19:02.213974 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 06:19:02.214927 systemd[1]: Stopped target basic.target - Basic System. Jan 28 06:19:02.216221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 06:19:02.217879 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 06:19:02.219298 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 06:19:02.220914 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 06:19:02.222396 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 06:19:02.224021 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 06:19:02.225459 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 06:19:02.226879 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 06:19:02.228533 systemd[1]: Stopped target swap.target - Swaps. 
Jan 28 06:19:02.229794 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 06:19:02.230037 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 06:19:02.231636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:19:02.232678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:19:02.234026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 06:19:02.234375 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 06:19:02.235590 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 06:19:02.235837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 06:19:02.243636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 06:19:02.243815 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 06:19:02.245469 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 06:19:02.245722 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 06:19:02.248231 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 06:19:02.250326 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 06:19:02.250643 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:19:02.261744 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 06:19:02.262896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 06:19:02.263186 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:19:02.269387 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 06:19:02.269683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 06:19:02.285351 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 06:19:02.285602 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 06:19:02.308458 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 06:19:02.317116 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 06:19:02.317293 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 06:19:02.330987 ignition[1066]: INFO : Ignition 2.22.0 Jan 28 06:19:02.332651 ignition[1066]: INFO : Stage: umount Jan 28 06:19:02.332651 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:19:02.332651 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:19:02.336574 ignition[1066]: INFO : umount: umount passed Jan 28 06:19:02.338385 ignition[1066]: INFO : Ignition finished successfully Jan 28 06:19:02.339677 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 06:19:02.339834 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 06:19:02.341077 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 06:19:02.341212 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 06:19:02.342060 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 06:19:02.342129 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 06:19:02.343354 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 06:19:02.343436 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 28 06:19:02.344838 systemd[1]: Stopped target network.target - Network. Jan 28 06:19:02.346192 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 06:19:02.346293 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 06:19:02.347620 systemd[1]: Stopped target paths.target - Path Units. Jan 28 06:19:02.348902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 06:19:02.349284 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:19:02.350327 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 06:19:02.351769 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 06:19:02.353160 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 06:19:02.353251 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 06:19:02.354451 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 06:19:02.354515 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 06:19:02.355876 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 06:19:02.355975 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 06:19:02.357468 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 06:19:02.357602 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 06:19:02.358857 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 06:19:02.359018 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 06:19:02.360604 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 06:19:02.362403 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 06:19:02.364917 systemd-networkd[822]: eth0: DHCPv6 lease lost Jan 28 06:19:02.368800 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 06:19:02.369035 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 06:19:02.372271 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 28 06:19:02.372633 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 06:19:02.372873 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 06:19:02.378889 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 28 06:19:02.379951 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 06:19:02.381490 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 06:19:02.381693 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:19:02.384775 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 06:19:02.386336 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 06:19:02.386422 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 06:19:02.388067 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 06:19:02.388138 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:19:02.391337 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 06:19:02.391421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 06:19:02.394345 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 28 06:19:02.394418 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:19:02.395705 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:19:02.398764 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 06:19:02.398881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 28 06:19:02.416696 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 06:19:02.419013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:19:02.421390 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 06:19:02.422682 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 06:19:02.424709 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 06:19:02.424791 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 06:19:02.426670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 06:19:02.426731 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:19:02.427908 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 06:19:02.427986 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 06:19:02.430098 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 06:19:02.430197 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 06:19:02.431463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 06:19:02.431572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 06:19:02.434175 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 06:19:02.436584 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 06:19:02.436662 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 06:19:02.439726 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 06:19:02.439814 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:19:02.447259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 06:19:02.447354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:19:02.451096 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 28 06:19:02.451182 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 28 06:19:02.451258 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 06:19:02.462202 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 06:19:02.462396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 06:19:02.464334 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 06:19:02.466796 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 06:19:02.488659 systemd[1]: Switching root. Jan 28 06:19:02.528104 systemd-journald[210]: Journal stopped Jan 28 06:19:04.207126 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). 
Jan 28 06:19:04.207250 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 06:19:04.207310 kernel: SELinux: policy capability open_perms=1 Jan 28 06:19:04.207333 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 06:19:04.207351 kernel: SELinux: policy capability always_check_network=0 Jan 28 06:19:04.207377 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 06:19:04.207397 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 06:19:04.207452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 06:19:04.207481 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 06:19:04.207507 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 06:19:04.207544 kernel: audit: type=1403 audit(1769581142.938:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 06:19:04.207565 systemd[1]: Successfully loaded SELinux policy in 82.463ms. Jan 28 06:19:04.207612 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.187ms. Jan 28 06:19:04.207637 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 06:19:04.207666 systemd[1]: Detected virtualization kvm. Jan 28 06:19:04.207709 systemd[1]: Detected architecture x86-64. Jan 28 06:19:04.207730 systemd[1]: Detected first boot. Jan 28 06:19:04.207749 systemd[1]: Hostname set to . Jan 28 06:19:04.207777 systemd[1]: Initializing machine ID from VM UUID. Jan 28 06:19:04.207798 zram_generator::config[1111]: No configuration found. Jan 28 06:19:04.207818 kernel: Guest personality initialized and is inactive Jan 28 06:19:04.207850 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 06:19:04.207878 kernel: Initialized host personality Jan 28 06:19:04.207897 kernel: NET: Registered PF_VSOCK protocol family Jan 28 06:19:04.207943 systemd[1]: Populated /etc with preset unit settings. Jan 28 06:19:04.207966 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 28 06:19:04.207995 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 06:19:04.208036 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 06:19:04.208058 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 06:19:04.208077 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 06:19:04.208096 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 06:19:04.208116 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 06:19:04.208162 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 06:19:04.208206 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 06:19:04.208241 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 06:19:04.208264 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 06:19:04.208292 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 06:19:04.208312 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 28 06:19:04.210564 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:19:04.210593 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 06:19:04.210629 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 06:19:04.210651 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 06:19:04.210681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 06:19:04.210723 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 06:19:04.210762 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:19:04.210784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:19:04.210803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 06:19:04.210832 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 06:19:04.210854 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 06:19:04.210874 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 06:19:04.210892 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:19:04.210923 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 06:19:04.210944 systemd[1]: Reached target slices.target - Slice Units. Jan 28 06:19:04.210978 systemd[1]: Reached target swap.target - Swaps. Jan 28 06:19:04.211000 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 06:19:04.211020 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 06:19:04.211040 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 06:19:04.211058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:19:04.211077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 06:19:04.211096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:19:04.211128 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 06:19:04.211148 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 06:19:04.211180 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 06:19:04.211202 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 06:19:04.211221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:04.211240 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 06:19:04.211260 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 06:19:04.211287 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 06:19:04.211316 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 06:19:04.211337 systemd[1]: Reached target machines.target - Containers. Jan 28 06:19:04.211356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 28 06:19:04.211390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:19:04.211411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 06:19:04.211430 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 06:19:04.211448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 06:19:04.211476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 06:19:04.211497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 06:19:04.211542 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 06:19:04.211564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 06:19:04.211638 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 06:19:04.211661 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 06:19:04.211680 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 06:19:04.211699 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 06:19:04.211718 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 06:19:04.211738 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:19:04.211757 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 06:19:04.211776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 06:19:04.211795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 06:19:04.211839 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 06:19:04.211886 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 06:19:04.211951 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 06:19:04.211974 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 06:19:04.211993 systemd[1]: Stopped verity-setup.service. Jan 28 06:19:04.212012 kernel: loop: module loaded Jan 28 06:19:04.212031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:04.212051 kernel: fuse: init (API version 7.41) Jan 28 06:19:04.212069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 06:19:04.212105 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 06:19:04.212126 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 06:19:04.212146 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 06:19:04.212165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 06:19:04.212193 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 06:19:04.212221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:19:04.212244 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 28 06:19:04.212263 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 06:19:04.212282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 06:19:04.212320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 06:19:04.212350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 06:19:04.212372 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 06:19:04.212392 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 06:19:04.212411 kernel: ACPI: bus type drm_connector registered Jan 28 06:19:04.212430 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 06:19:04.212449 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 06:19:04.212468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 06:19:04.212500 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 06:19:04.214607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 06:19:04.214648 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 06:19:04.214707 systemd-journald[1198]: Collecting audit messages is disabled. Jan 28 06:19:04.214771 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 06:19:04.214795 systemd-journald[1198]: Journal started Jan 28 06:19:04.214836 systemd-journald[1198]: Runtime Journal (/run/log/journal/f6b4a1dee90e422cb028046dcf68a91a) is 4.7M, max 37.8M, 33.1M free. Jan 28 06:19:03.774328 systemd[1]: Queued start job for default target multi-user.target. Jan 28 06:19:03.784766 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 06:19:03.785529 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 06:19:04.221568 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 06:19:04.234917 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 06:19:04.235001 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 06:19:04.243542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 06:19:04.251626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 06:19:04.251691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:19:04.259588 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 06:19:04.265023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 06:19:04.267547 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 06:19:04.270553 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 06:19:04.277554 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 06:19:04.281549 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 06:19:04.285528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 06:19:04.290490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 28 06:19:04.291641 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 06:19:04.292578 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 06:19:04.305761 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 06:19:04.320361 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 06:19:04.327873 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 06:19:04.336392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:19:04.341147 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 06:19:04.348311 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 06:19:04.362764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 06:19:04.365610 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 06:19:04.372902 kernel: loop0: detected capacity change from 0 to 229808 Jan 28 06:19:04.374918 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 06:19:04.412001 systemd-journald[1198]: Time spent on flushing to /var/log/journal/f6b4a1dee90e422cb028046dcf68a91a is 42.014ms for 1171 entries. Jan 28 06:19:04.412001 systemd-journald[1198]: System Journal (/var/log/journal/f6b4a1dee90e422cb028046dcf68a91a) is 8M, max 584.8M, 576.8M free. Jan 28 06:19:04.471730 systemd-journald[1198]: Received client request to flush runtime journal. Jan 28 06:19:04.471799 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 06:19:04.477877 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 06:19:04.480601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:19:04.495468 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 06:19:04.508071 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 06:19:04.518853 kernel: loop1: detected capacity change from 0 to 128560 Jan 28 06:19:04.516726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 06:19:04.575267 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 28 06:19:04.575296 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 28 06:19:04.584545 kernel: loop2: detected capacity change from 0 to 110984 Jan 28 06:19:04.588659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:19:04.628614 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:19:04.694314 kernel: loop3: detected capacity change from 0 to 8 Jan 28 06:19:04.728009 kernel: loop4: detected capacity change from 0 to 229808 Jan 28 06:19:04.751349 kernel: loop5: detected capacity change from 0 to 128560 Jan 28 06:19:04.776498 kernel: loop6: detected capacity change from 0 to 110984 Jan 28 06:19:04.804478 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 06:19:04.817969 kernel: loop7: detected capacity change from 0 to 8 Jan 28 06:19:04.825286 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 28 06:19:04.826225 (sd-merge)[1275]: Merged extensions into '/usr'. 
Jan 28 06:19:04.843667 systemd[1]: Reload requested from client PID 1226 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 06:19:04.843702 systemd[1]: Reloading... Jan 28 06:19:04.935559 zram_generator::config[1297]: No configuration found. Jan 28 06:19:05.302568 ldconfig[1219]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 06:19:05.354057 systemd[1]: Reloading finished in 509 ms. Jan 28 06:19:05.382812 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 06:19:05.384077 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 06:19:05.404758 systemd[1]: Starting ensure-sysext.service... Jan 28 06:19:05.408784 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 06:19:05.473295 systemd[1]: Reload requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Jan 28 06:19:05.473326 systemd[1]: Reloading... Jan 28 06:19:05.509372 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 06:19:05.510220 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 06:19:05.510894 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 06:19:05.511410 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 06:19:05.513088 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 06:19:05.513972 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Jan 28 06:19:05.514079 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Jan 28 06:19:05.521761 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 06:19:05.522703 systemd-tmpfiles[1358]: Skipping /boot Jan 28 06:19:05.562459 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 06:19:05.563751 systemd-tmpfiles[1358]: Skipping /boot Jan 28 06:19:05.630562 zram_generator::config[1382]: No configuration found. Jan 28 06:19:05.914414 systemd[1]: Reloading finished in 440 ms. Jan 28 06:19:05.938578 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 06:19:05.955086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:19:05.965224 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 06:19:05.969847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 06:19:05.982694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 06:19:05.987970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 06:19:05.992122 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:19:05.996129 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 06:19:06.002932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:06.003209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 28 06:19:06.008238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 06:19:06.016803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 06:19:06.031011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 06:19:06.031949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:19:06.032110 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:19:06.032265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:06.038215 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:06.038498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:19:06.039050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:19:06.039183 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:19:06.043771 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 06:19:06.044474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:06.054616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:06.055027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:19:06.057744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 06:19:06.062019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:19:06.062235 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:19:06.062433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:19:06.066554 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 06:19:06.072878 systemd[1]: Finished ensure-sysext.service. Jan 28 06:19:06.100089 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 06:19:06.106103 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 06:19:06.108979 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 06:19:06.120295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 06:19:06.120644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 28 06:19:06.122814 systemd-udevd[1448]: Using default interface naming scheme 'v255'. Jan 28 06:19:06.128392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 06:19:06.133708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 06:19:06.137044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 06:19:06.138294 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 06:19:06.138613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 06:19:06.140963 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 06:19:06.151086 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 06:19:06.151723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 06:19:06.158146 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 06:19:06.187817 augenrules[1485]: No rules Jan 28 06:19:06.189610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:19:06.192279 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 06:19:06.193149 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 06:19:06.203048 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 06:19:06.216816 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 06:19:06.226418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 06:19:06.227797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 06:19:06.533097 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 06:19:06.541424 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 06:19:06.569887 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 06:19:06.596998 systemd-resolved[1446]: Positive Trust Anchors: Jan 28 06:19:06.597018 systemd-resolved[1446]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 06:19:06.597073 systemd-resolved[1446]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 06:19:06.603085 systemd-networkd[1494]: lo: Link UP Jan 28 06:19:06.603097 systemd-networkd[1494]: lo: Gained carrier Jan 28 06:19:06.610680 systemd-networkd[1494]: Enumeration completed Jan 28 06:19:06.610810 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 06:19:06.612124 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 28 06:19:06.612137 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 06:19:06.617213 systemd-resolved[1446]: Using system hostname 'srv-4e3e3.gb1.brightbox.com'. Jan 28 06:19:06.617855 systemd-networkd[1494]: eth0: Link UP Jan 28 06:19:06.618103 systemd-networkd[1494]: eth0: Gained carrier Jan 28 06:19:06.618132 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 06:19:06.619987 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 06:19:06.624652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 06:19:06.628831 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 06:19:06.630276 systemd[1]: Reached target network.target - Network. Jan 28 06:19:06.630957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:19:06.632614 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 06:19:06.633427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 06:19:06.634640 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 06:19:06.635898 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 06:19:06.638061 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 06:19:06.639159 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 06:19:06.640901 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 06:19:06.641665 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 06:19:06.641706 systemd[1]: Reached target paths.target - Path Units. Jan 28 06:19:06.642595 systemd[1]: Reached target timers.target - Timer Units. Jan 28 06:19:06.645277 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 06:19:06.650712 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 06:19:06.652614 systemd-networkd[1494]: eth0: DHCPv4 address 10.230.78.222/30, gateway 10.230.78.221 acquired from 10.230.78.221 Jan 28 06:19:06.653575 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 28 06:19:06.656933 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 06:19:06.658887 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 06:19:06.659855 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 06:19:06.668623 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 06:19:06.671188 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 06:19:06.674964 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 06:19:06.696922 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 06:19:06.698680 systemd[1]: Reached target basic.target - Basic System. Jan 28 06:19:06.699389 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 28 06:19:06.699438 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 06:19:06.701992 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 06:19:06.705974 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 06:19:06.709493 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 06:19:06.713839 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 06:19:06.717796 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 06:19:06.727740 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 06:19:06.729619 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 06:19:06.734504 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 06:19:06.745663 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 06:19:06.750168 jq[1541]: false Jan 28 06:19:06.753816 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 06:19:06.761407 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 06:19:06.765845 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 06:19:06.769592 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:06.780907 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 06:19:06.785821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 06:19:06.786743 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 06:19:06.788685 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 06:19:06.797336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 06:19:06.801463 extend-filesystems[1542]: Found /dev/vda6 Jan 28 06:19:06.801679 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 06:19:06.812661 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 06:19:06.814361 extend-filesystems[1542]: Found /dev/vda9 Jan 28 06:19:06.813901 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 06:19:06.814783 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 06:19:06.830618 jq[1555]: true Jan 28 06:19:06.835312 update_engine[1554]: I20260128 06:19:06.835220 1554 main.cc:92] Flatcar Update Engine starting Jan 28 06:19:06.838203 extend-filesystems[1542]: Checking size of /dev/vda9 Jan 28 06:19:06.842388 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing passwd entry cache Jan 28 06:19:06.844182 oslogin_cache_refresh[1543]: Refreshing passwd entry cache Jan 28 06:19:06.844198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 06:19:06.864819 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 06:19:06.872240 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 28 06:19:06.873593 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 06:19:06.885130 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting users, quitting Jan 28 06:19:06.885130 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 06:19:06.885130 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing group entry cache Jan 28 06:19:06.884202 oslogin_cache_refresh[1543]: Failure getting users, quitting Jan 28 06:19:06.884228 oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 06:19:06.884321 oslogin_cache_refresh[1543]: Refreshing group entry cache Jan 28 06:19:06.887908 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting groups, quitting Jan 28 06:19:06.887908 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 06:19:06.886213 oslogin_cache_refresh[1543]: Failure getting groups, quitting Jan 28 06:19:06.886229 oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 06:19:06.888123 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 06:19:06.889622 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 06:19:06.901593 extend-filesystems[1542]: Resized partition /dev/vda9 Jan 28 06:19:06.908243 dbus-daemon[1539]: [system] SELinux support is enabled Jan 28 06:19:06.912354 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 06:19:06.932538 dbus-daemon[1539]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1494 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 28 06:19:06.940007 extend-filesystems[1583]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 06:19:06.939719 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 06:19:06.939800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 06:19:06.953916 update_engine[1554]: I20260128 06:19:06.952973 1554 update_check_scheduler.cc:74] Next update check in 2m39s Jan 28 06:19:06.943449 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 06:19:06.941691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 06:19:06.941721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 06:19:06.943334 systemd[1]: Started update-engine.service - Update Engine. Jan 28 06:19:06.960568 tar[1559]: linux-amd64/LICENSE Jan 28 06:19:06.960568 tar[1559]: linux-amd64/helm Jan 28 06:19:06.957416 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 28 06:19:06.987557 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 28 06:19:06.965461 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 06:19:06.968429 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 06:19:06.998674 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 06:19:06.999150 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 06:19:07.019335 jq[1567]: true Jan 28 06:19:07.047867 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 06:19:07.067160 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 06:19:07.242977 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 28 06:19:07.244586 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 28 06:19:07.245362 dbus-daemon[1539]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1587 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 28 06:19:07.257145 systemd[1]: Starting polkit.service - Authorization Manager... Jan 28 06:19:07.316901 bash[1612]: Updated "/home/core/.ssh/authorized_keys" Jan 28 06:19:07.318310 systemd-logind[1552]: New seat seat0. Jan 28 06:19:07.321269 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 06:19:07.327562 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 28 06:19:07.328905 systemd[1]: Starting sshkeys.service... Jan 28 06:19:07.335333 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 06:19:07.349854 kernel: ACPI: button: Power Button [PWRF] Jan 28 06:19:07.361549 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 06:19:07.368394 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 06:19:07.375325 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 06:19:07.445363 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 06:19:07.449866 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 28 06:19:07.472544 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:07.563547 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 28 06:19:07.589102 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 06:19:07.589102 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 28 06:19:07.589102 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 28 06:19:07.596282 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Jan 28 06:19:07.593687 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 06:19:07.605675 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 06:19:07.596792 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 06:19:07.645640 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 06:19:07.652390 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 06:19:07.678013 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 28 06:19:07.679406 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 06:19:07.685881 polkitd[1613]: Started polkitd version 126 Jan 28 06:19:07.688176 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 06:19:07.697341 polkitd[1613]: Loading rules from directory /etc/polkit-1/rules.d Jan 28 06:19:07.697818 polkitd[1613]: Loading rules from directory /run/polkit-1/rules.d Jan 28 06:19:07.697892 polkitd[1613]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 06:19:07.698235 polkitd[1613]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 28 06:19:07.698272 polkitd[1613]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 06:19:07.698325 polkitd[1613]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 28 06:19:07.701340 polkitd[1613]: Finished loading, compiling and executing 2 rules Jan 28 06:19:07.702283 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 28 06:19:07.702972 systemd[1]: Started polkit.service - Authorization Manager. Jan 28 06:19:07.704894 polkitd[1613]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 28 06:19:07.755632 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 06:19:07.760688 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 06:19:07.765178 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 06:19:07.768238 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 06:19:07.790062 systemd-hostnamed[1587]: Hostname set to (static) Jan 28 06:19:07.874286 containerd[1577]: time="2026-01-28T06:19:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 06:19:07.879972 containerd[1577]: time="2026-01-28T06:19:07.877854900Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 28 06:19:07.934393 containerd[1577]: time="2026-01-28T06:19:07.934283618Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.819µs" Jan 28 06:19:07.934916 containerd[1577]: time="2026-01-28T06:19:07.934883723Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 06:19:07.935607 containerd[1577]: time="2026-01-28T06:19:07.935567940Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 06:19:07.936030 containerd[1577]: time="2026-01-28T06:19:07.936001115Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939132248Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939211649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939334208Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939358373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939645756Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939670853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939688885Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939703436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.939921831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.940399350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 06:19:07.940543 containerd[1577]: time="2026-01-28T06:19:07.940447604Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 06:19:07.941042 containerd[1577]: time="2026-01-28T06:19:07.940464620Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 06:19:07.944587 containerd[1577]: time="2026-01-28T06:19:07.944034210Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 06:19:07.944587 containerd[1577]: time="2026-01-28T06:19:07.944308782Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 06:19:07.944587 containerd[1577]: time="2026-01-28T06:19:07.944423434Z" level=info msg="metadata content store policy set" policy=shared Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951714286Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951810004Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951837598Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951859565Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951903366Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951928207Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951959058Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.951989142Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.952009877Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.952028517Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.952044415Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.952065822Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.952258694Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 06:19:07.952599 containerd[1577]: time="2026-01-28T06:19:07.952310167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952374716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952402728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952423472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952441676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952471865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952487922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952534946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 06:19:07.953161 containerd[1577]: time="2026-01-28T06:19:07.952557788Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 06:19:07.957188 containerd[1577]: time="2026-01-28T06:19:07.956256539Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 06:19:07.957188 containerd[1577]: time="2026-01-28T06:19:07.957059379Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 06:19:07.957188 containerd[1577]: time="2026-01-28T06:19:07.957110463Z" level=info msg="Start snapshots syncer" Jan 28 06:19:07.960768 containerd[1577]: time="2026-01-28T06:19:07.957156500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 
06:19:07.962445 containerd[1577]: time="2026-01-28T06:19:07.962056882Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 06:19:07.962445 containerd[1577]: time="2026-01-28T06:19:07.962208582Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 06:19:07.962821 containerd[1577]: time="2026-01-28T06:19:07.962355813Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 06:19:07.963591 containerd[1577]: time="2026-01-28T06:19:07.963414146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.964919829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.964954223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.964997267Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.965023132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.965083119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.965114740Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 06:19:07.965543 containerd[1577]: time="2026-01-28T06:19:07.965187519Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 06:19:07.966549 containerd[1577]: time="2026-01-28T06:19:07.965215078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 06:19:07.966549 containerd[1577]: time="2026-01-28T06:19:07.966148648Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 06:19:07.966549 containerd[1577]: time="2026-01-28T06:19:07.966236241Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 06:19:07.966549 containerd[1577]: time="2026-01-28T06:19:07.966287787Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 06:19:07.966549 containerd[1577]: time="2026-01-28T06:19:07.966307406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 06:19:07.966549 containerd[1577]: time="2026-01-28T06:19:07.966324054Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 06:19:07.967759 containerd[1577]: time="2026-01-28T06:19:07.966933750Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 06:19:07.967759 containerd[1577]: time="2026-01-28T06:19:07.966967923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 06:19:07.968543 containerd[1577]: time="2026-01-28T06:19:07.968100177Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 06:19:07.968543 containerd[1577]: time="2026-01-28T06:19:07.968141919Z" level=info msg="runtime interface created" Jan 28 06:19:07.968543 containerd[1577]: time="2026-01-28T06:19:07.968154672Z" level=info msg="created NRI interface" Jan 28 06:19:07.968543 containerd[1577]: time="2026-01-28T06:19:07.968209413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 06:19:07.968543 containerd[1577]: time="2026-01-28T06:19:07.968237168Z" level=info msg="Connect containerd service" Jan 28 06:19:07.968829 containerd[1577]: time="2026-01-28T06:19:07.968802337Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 06:19:07.976910 containerd[1577]: time="2026-01-28T06:19:07.975682927Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 06:19:08.035505 systemd-networkd[1494]: eth0: Gained IPv6LL Jan 28 06:19:08.038603 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 28 06:19:08.044690 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 06:19:08.048771 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 06:19:08.054851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:19:08.061227 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.254498287Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.256940488Z" level=info msg="Start subscribing containerd event" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257166318Z" level=info msg="Start recovering state" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257332397Z" level=info msg="Start event monitor" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257363438Z" level=info msg="Start cni network conf syncer for default" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257376875Z" level=info msg="Start streaming server" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257400012Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257412013Z" level=info msg="runtime interface starting up..." Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257430898Z" level=info msg="starting plugins..." Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257452476Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.257798733Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 06:19:08.275187 containerd[1577]: time="2026-01-28T06:19:08.273633247Z" level=info msg="containerd successfully booted in 0.401326s" Jan 28 06:19:08.258069 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 06:19:08.287250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:19:08.347264 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 06:19:08.411594 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 06:19:08.593746 systemd-logind[1552]: Watching system buttons on /dev/input/event3 (Power Button) Jan 28 06:19:09.065198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:19:09.134556 tar[1559]: linux-amd64/README.md Jan 28 06:19:09.157168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 06:19:09.525928 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 28 06:19:09.531651 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 28 06:19:09.895754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:19:09.905546 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:09.905626 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:09.909991 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:19:10.614960 systemd-networkd[1494]: eth0: Ignoring DHCPv6 address 2a02:1348:179:93b7:24:19ff:fee6:4ede/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:93b7:24:19ff:fee6:4ede/64 assigned by NDisc. Jan 28 06:19:10.615417 systemd-networkd[1494]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 28 06:19:10.618232 kubelet[1705]: E0128 06:19:10.618115 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:19:10.621255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:19:10.621701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:19:10.622896 systemd[1]: kubelet.service: Consumed 1.528s CPU time, 267.4M memory peak. Jan 28 06:19:10.756693 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 28 06:19:10.760742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 06:19:10.763829 systemd[1]: Started sshd@0-10.230.78.222:22-68.220.241.50:43354.service - OpenSSH per-connection server daemon (68.220.241.50:43354). Jan 28 06:19:11.363595 sshd[1717]: Accepted publickey for core from 68.220.241.50 port 43354 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:11.365695 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:11.390426 systemd-logind[1552]: New session 1 of user core. Jan 28 06:19:11.397278 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 06:19:11.400853 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 06:19:11.451620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 06:19:11.456940 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 06:19:11.474634 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 06:19:11.479377 systemd-logind[1552]: New session c1 of user core. Jan 28 06:19:11.678151 systemd[1722]: Queued start job for default target default.target. Jan 28 06:19:11.701918 systemd[1722]: Created slice app.slice - User Application Slice. Jan 28 06:19:11.702128 systemd[1722]: Reached target paths.target - Paths. Jan 28 06:19:11.702219 systemd[1722]: Reached target timers.target - Timers. Jan 28 06:19:11.704618 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 06:19:11.738969 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 06:19:11.739150 systemd[1722]: Reached target sockets.target - Sockets. Jan 28 06:19:11.739213 systemd[1722]: Reached target basic.target - Basic System. Jan 28 06:19:11.739370 systemd[1722]: Reached target default.target - Main User Target. Jan 28 06:19:11.739435 systemd[1722]: Startup finished in 250ms. Jan 28 06:19:11.739559 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 06:19:11.750842 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 06:19:11.922574 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:11.926543 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:12.177563 systemd[1]: Started sshd@1-10.230.78.222:22-68.220.241.50:33408.service - OpenSSH per-connection server daemon (68.220.241.50:33408). 
Jan 28 06:19:12.770086 sshd[1735]: Accepted publickey for core from 68.220.241.50 port 33408 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:12.771827 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:12.778705 systemd-logind[1552]: New session 2 of user core. Jan 28 06:19:12.790020 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 06:19:12.835312 login[1655]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 28 06:19:12.845590 systemd-logind[1552]: New session 3 of user core. Jan 28 06:19:12.851394 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 06:19:12.873753 login[1654]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 28 06:19:12.883613 systemd-logind[1552]: New session 4 of user core. Jan 28 06:19:12.890958 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 06:19:13.170806 sshd[1738]: Connection closed by 68.220.241.50 port 33408 Jan 28 06:19:13.172345 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:13.179794 systemd[1]: sshd@1-10.230.78.222:22-68.220.241.50:33408.service: Deactivated successfully. Jan 28 06:19:13.182821 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 06:19:13.185423 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. Jan 28 06:19:13.187266 systemd-logind[1552]: Removed session 2. Jan 28 06:19:13.277036 systemd[1]: Started sshd@2-10.230.78.222:22-68.220.241.50:33420.service - OpenSSH per-connection server daemon (68.220.241.50:33420). Jan 28 06:19:13.864577 sshd[1769]: Accepted publickey for core from 68.220.241.50 port 33420 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:13.866318 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:13.874012 systemd-logind[1552]: New session 5 of user core. Jan 28 06:19:13.885856 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 06:19:14.268356 sshd[1772]: Connection closed by 68.220.241.50 port 33420 Jan 28 06:19:14.269180 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:14.275013 systemd[1]: sshd@2-10.230.78.222:22-68.220.241.50:33420.service: Deactivated successfully. Jan 28 06:19:14.278431 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 06:19:14.280233 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Jan 28 06:19:14.283045 systemd-logind[1552]: Removed session 5. 
Jan 28 06:19:15.953582 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:15.959554 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:19:15.967826 coreos-metadata[1618]: Jan 28 06:19:15.967 WARN failed to locate config-drive, using the metadata service API instead Jan 28 06:19:15.970631 coreos-metadata[1538]: Jan 28 06:19:15.970 WARN failed to locate config-drive, using the metadata service API instead Jan 28 06:19:15.991446 coreos-metadata[1618]: Jan 28 06:19:15.991 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 28 06:19:15.991940 coreos-metadata[1538]: Jan 28 06:19:15.991 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 28 06:19:16.001941 coreos-metadata[1538]: Jan 28 06:19:16.001 INFO Fetch failed with 404: resource not found Jan 28 06:19:16.001941 coreos-metadata[1538]: Jan 28 06:19:16.001 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 06:19:16.002792 coreos-metadata[1538]: Jan 28 06:19:16.002 INFO Fetch successful Jan 28 06:19:16.002959 coreos-metadata[1538]: Jan 28 06:19:16.002 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 28 06:19:16.021790 coreos-metadata[1538]: Jan 28 06:19:16.021 INFO Fetch successful Jan 28 06:19:16.021790 coreos-metadata[1538]: Jan 28 06:19:16.021 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 28 06:19:16.024320 coreos-metadata[1618]: Jan 28 06:19:16.024 INFO Fetch successful Jan 28 06:19:16.024584 coreos-metadata[1618]: Jan 28 06:19:16.024 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 28 06:19:16.041744 coreos-metadata[1538]: Jan 28 06:19:16.041 INFO Fetch successful Jan 28 06:19:16.041744 coreos-metadata[1538]: Jan 28 06:19:16.041 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 28 06:19:16.050925 coreos-metadata[1618]: Jan 28 06:19:16.050 INFO Fetch successful Jan 28 06:19:16.053167 unknown[1618]: wrote ssh authorized keys file for user: core Jan 28 06:19:16.057357 coreos-metadata[1538]: Jan 28 06:19:16.056 INFO Fetch successful Jan 28 06:19:16.057666 coreos-metadata[1538]: Jan 28 06:19:16.057 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 28 06:19:16.082652 update-ssh-keys[1781]: Updated "/home/core/.ssh/authorized_keys" Jan 28 06:19:16.083849 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 06:19:16.087209 systemd[1]: Finished sshkeys.service. Jan 28 06:19:16.088084 coreos-metadata[1538]: Jan 28 06:19:16.088 INFO Fetch successful Jan 28 06:19:16.131802 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 06:19:16.132890 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 06:19:16.133126 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 06:19:16.133643 systemd[1]: Startup finished in 3.552s (kernel) + 15.260s (initrd) + 13.276s (userspace) = 32.090s. Jan 28 06:19:20.768474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 06:19:20.770993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:19:20.992566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 06:19:21.005016 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:19:21.095249 kubelet[1798]: E0128 06:19:21.095002 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:19:21.100260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:19:21.100724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:19:21.101690 systemd[1]: kubelet.service: Consumed 252ms CPU time, 108.8M memory peak. Jan 28 06:19:24.385911 systemd[1]: Started sshd@3-10.230.78.222:22-68.220.241.50:45710.service - OpenSSH per-connection server daemon (68.220.241.50:45710). Jan 28 06:19:24.964816 sshd[1806]: Accepted publickey for core from 68.220.241.50 port 45710 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:24.966724 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:24.974592 systemd-logind[1552]: New session 6 of user core. Jan 28 06:19:24.981768 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 06:19:25.367193 sshd[1809]: Connection closed by 68.220.241.50 port 45710 Jan 28 06:19:25.368088 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:25.372274 systemd[1]: sshd@3-10.230.78.222:22-68.220.241.50:45710.service: Deactivated successfully. Jan 28 06:19:25.375132 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 06:19:25.378231 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Jan 28 06:19:25.379858 systemd-logind[1552]: Removed session 6. Jan 28 06:19:25.468065 systemd[1]: Started sshd@4-10.230.78.222:22-68.220.241.50:45720.service - OpenSSH per-connection server daemon (68.220.241.50:45720). Jan 28 06:19:26.045185 sshd[1815]: Accepted publickey for core from 68.220.241.50 port 45720 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:26.047359 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:26.055399 systemd-logind[1552]: New session 7 of user core. Jan 28 06:19:26.065730 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 06:19:26.440792 sshd[1818]: Connection closed by 68.220.241.50 port 45720 Jan 28 06:19:26.441827 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:26.447713 systemd[1]: sshd@4-10.230.78.222:22-68.220.241.50:45720.service: Deactivated successfully. Jan 28 06:19:26.449992 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 06:19:26.451097 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Jan 28 06:19:26.453115 systemd-logind[1552]: Removed session 7. Jan 28 06:19:26.546226 systemd[1]: Started sshd@5-10.230.78.222:22-68.220.241.50:45722.service - OpenSSH per-connection server daemon (68.220.241.50:45722). 
Jan 28 06:19:27.123230 sshd[1824]: Accepted publickey for core from 68.220.241.50 port 45722 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:27.124946 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:27.133494 systemd-logind[1552]: New session 8 of user core. Jan 28 06:19:27.140819 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 06:19:27.523405 sshd[1827]: Connection closed by 68.220.241.50 port 45722 Jan 28 06:19:27.524203 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:27.530822 systemd[1]: sshd@5-10.230.78.222:22-68.220.241.50:45722.service: Deactivated successfully. Jan 28 06:19:27.533626 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 06:19:27.535851 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. Jan 28 06:19:27.537684 systemd-logind[1552]: Removed session 8. Jan 28 06:19:27.637020 systemd[1]: Started sshd@6-10.230.78.222:22-68.220.241.50:45732.service - OpenSSH per-connection server daemon (68.220.241.50:45732). Jan 28 06:19:28.242261 sshd[1833]: Accepted publickey for core from 68.220.241.50 port 45732 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:28.244103 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:28.251583 systemd-logind[1552]: New session 9 of user core. Jan 28 06:19:28.261727 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 06:19:28.577722 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 06:19:28.578183 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:19:28.592035 sudo[1837]: pam_unix(sudo:session): session closed for user root Jan 28 06:19:28.688226 sshd[1836]: Connection closed by 68.220.241.50 port 45732 Jan 28 06:19:28.687042 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:28.692438 systemd[1]: sshd@6-10.230.78.222:22-68.220.241.50:45732.service: Deactivated successfully. Jan 28 06:19:28.694616 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 06:19:28.697327 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Jan 28 06:19:28.699015 systemd-logind[1552]: Removed session 9. Jan 28 06:19:28.787991 systemd[1]: Started sshd@7-10.230.78.222:22-68.220.241.50:45736.service - OpenSSH per-connection server daemon (68.220.241.50:45736). Jan 28 06:19:29.375722 sshd[1843]: Accepted publickey for core from 68.220.241.50 port 45736 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:29.378167 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:29.386184 systemd-logind[1552]: New session 10 of user core. Jan 28 06:19:29.392772 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 06:19:29.691695 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 06:19:29.692921 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:19:29.705974 sudo[1848]: pam_unix(sudo:session): session closed for user root Jan 28 06:19:29.717882 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 06:19:29.718386 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:19:29.739011 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 06:19:29.808054 augenrules[1870]: No rules Jan 28 06:19:29.809379 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 06:19:29.809927 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 06:19:29.813129 sudo[1847]: pam_unix(sudo:session): session closed for user root Jan 28 06:19:29.901994 sshd[1846]: Connection closed by 68.220.241.50 port 45736 Jan 28 06:19:29.902690 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:29.908690 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Jan 28 06:19:29.910203 systemd[1]: sshd@7-10.230.78.222:22-68.220.241.50:45736.service: Deactivated successfully. Jan 28 06:19:29.913203 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 06:19:29.915475 systemd-logind[1552]: Removed session 10. Jan 28 06:19:30.003170 systemd[1]: Started sshd@8-10.230.78.222:22-68.220.241.50:45748.service - OpenSSH per-connection server daemon (68.220.241.50:45748). Jan 28 06:19:30.581965 sshd[1879]: Accepted publickey for core from 68.220.241.50 port 45748 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:19:30.584419 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:30.591583 systemd-logind[1552]: New session 11 of user core. Jan 28 06:19:30.598731 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 06:19:30.898326 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 06:19:30.898787 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:19:31.270712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 06:19:31.276497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:19:31.562958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:19:31.580162 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:19:31.680644 kubelet[1908]: E0128 06:19:31.680560 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:19:31.684031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:19:31.684449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:19:31.685398 systemd[1]: kubelet.service: Consumed 267ms CPU time, 109.7M memory peak. 
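The kubelet exits above all fail the same way: /var/lib/kubelet/config.yaml does not exist yet (it is only written once kubeadm configures the node), so the unit dies and systemd re-queues it with an incremented restart counter roughly every ten seconds, as the timestamps show. A minimal sketch of grouping those journal entries into (restart counter, missing path) pairs; the regular expressions and helper name are illustrative and only assume the wording visible in this log:

    import re

    RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    MISSING = re.compile(r"failed to load Kubelet config file (\S+),")

    def kubelet_restart_attempts(journal_lines):
        """Pair each config-file failure with the restart counter last reported by systemd."""
        counter = 0  # 0 stands for the initial start, before any scheduled restart
        attempts = []
        for line in journal_lines:
            if (m := RESTART.search(line)):
                counter = int(m.group(1))
            elif (m := MISSING.search(line)):
                attempts.append((counter, m.group(1)))
        return attempts

    demo = [
        'systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.',
        'kubelet[1908]: ... failed to load Kubelet config file /var/lib/kubelet/config.yaml, error ...',
    ]
    print(kubelet_restart_attempts(demo))   # [(2, '/var/lib/kubelet/config.yaml')]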
Jan 28 06:19:31.778940 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 06:19:31.807287 (dockerd)[1915]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 06:19:32.380809 dockerd[1915]: time="2026-01-28T06:19:32.380550634Z" level=info msg="Starting up" Jan 28 06:19:32.386435 dockerd[1915]: time="2026-01-28T06:19:32.386207138Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 06:19:32.406541 dockerd[1915]: time="2026-01-28T06:19:32.406334473Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 06:19:32.469431 dockerd[1915]: time="2026-01-28T06:19:32.469117024Z" level=info msg="Loading containers: start." Jan 28 06:19:32.499569 kernel: Initializing XFRM netlink socket Jan 28 06:19:32.809600 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 28 06:19:32.869511 systemd-networkd[1494]: docker0: Link UP Jan 28 06:19:32.874869 dockerd[1915]: time="2026-01-28T06:19:32.874728116Z" level=info msg="Loading containers: done." Jan 28 06:19:32.902554 dockerd[1915]: time="2026-01-28T06:19:32.902203438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 06:19:32.902554 dockerd[1915]: time="2026-01-28T06:19:32.902303382Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 06:19:32.902554 dockerd[1915]: time="2026-01-28T06:19:32.902443150Z" level=info msg="Initializing buildkit" Jan 28 06:19:32.932112 dockerd[1915]: time="2026-01-28T06:19:32.932054875Z" level=info msg="Completed buildkit initialization" Jan 28 06:19:32.943451 dockerd[1915]: time="2026-01-28T06:19:32.943403633Z" level=info msg="Daemon has completed initialization" Jan 28 06:19:32.943786 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 06:19:32.946262 dockerd[1915]: time="2026-01-28T06:19:32.945236311Z" level=info msg="API listen on /run/docker.sock" Jan 28 06:19:33.841556 systemd-resolved[1446]: Clock change detected. Flushing caches. Jan 28 06:19:33.843168 systemd-timesyncd[1469]: Contacted time server [2a01:7e00::f03c:94ff:fee2:9c9a]:123 (2.flatcar.pool.ntp.org). Jan 28 06:19:33.843273 systemd-timesyncd[1469]: Initial clock synchronization to Wed 2026-01-28 06:19:33.841307 UTC. Jan 28 06:19:34.825882 containerd[1577]: time="2026-01-28T06:19:34.825741090Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 28 06:19:35.578789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370114402.mount: Deactivated successfully. 
Jan 28 06:19:37.860416 containerd[1577]: time="2026-01-28T06:19:37.860352596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:37.865788 containerd[1577]: time="2026-01-28T06:19:37.865752752Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720" Jan 28 06:19:37.870093 containerd[1577]: time="2026-01-28T06:19:37.870025446Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:37.875119 containerd[1577]: time="2026-01-28T06:19:37.874504645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:37.877029 containerd[1577]: time="2026-01-28T06:19:37.876993254Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.051085158s" Jan 28 06:19:37.877236 containerd[1577]: time="2026-01-28T06:19:37.877197328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 28 06:19:37.881361 containerd[1577]: time="2026-01-28T06:19:37.881330142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 28 06:19:41.093546 containerd[1577]: time="2026-01-28T06:19:41.093485988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:41.094938 containerd[1577]: time="2026-01-28T06:19:41.094903073Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789" Jan 28 06:19:41.096162 containerd[1577]: time="2026-01-28T06:19:41.095849306Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:41.100493 containerd[1577]: time="2026-01-28T06:19:41.099615315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:41.101341 containerd[1577]: time="2026-01-28T06:19:41.101303067Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.219848987s" Jan 28 06:19:41.101416 containerd[1577]: time="2026-01-28T06:19:41.101343127Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 28 06:19:41.102369 
containerd[1577]: time="2026-01-28T06:19:41.102318904Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 28 06:19:41.192209 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 28 06:19:42.298091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 06:19:42.303298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:19:42.702451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:19:42.717565 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:19:42.867804 kubelet[2206]: E0128 06:19:42.867697 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:19:42.871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:19:42.871749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:19:42.872870 systemd[1]: kubelet.service: Consumed 440ms CPU time, 108.3M memory peak. Jan 28 06:19:43.081365 containerd[1577]: time="2026-01-28T06:19:43.081302318Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110" Jan 28 06:19:43.082209 containerd[1577]: time="2026-01-28T06:19:43.082172791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:43.085901 containerd[1577]: time="2026-01-28T06:19:43.085866086Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:43.087600 containerd[1577]: time="2026-01-28T06:19:43.087561654Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.985198136s" Jan 28 06:19:43.087676 containerd[1577]: time="2026-01-28T06:19:43.087603519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 28 06:19:43.088399 containerd[1577]: time="2026-01-28T06:19:43.088368567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 28 06:19:43.089555 containerd[1577]: time="2026-01-28T06:19:43.089503920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:46.207750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966204763.mount: Deactivated successfully. 
Jan 28 06:19:47.204480 containerd[1577]: time="2026-01-28T06:19:47.204404516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:47.206441 containerd[1577]: time="2026-01-28T06:19:47.206156518Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104" Jan 28 06:19:47.207313 containerd[1577]: time="2026-01-28T06:19:47.207270383Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:47.209924 containerd[1577]: time="2026-01-28T06:19:47.209885173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:47.211020 containerd[1577]: time="2026-01-28T06:19:47.210983182Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.122571818s" Jan 28 06:19:47.211172 containerd[1577]: time="2026-01-28T06:19:47.211144611Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 28 06:19:47.211805 containerd[1577]: time="2026-01-28T06:19:47.211768726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 28 06:19:47.846607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686492228.mount: Deactivated successfully. 
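Each image pull ends with a matched pair of containerd entries: a "stop pulling ... bytes read=N" line and a "Pulled image ... in D" line. Dividing the two gives an approximate transfer rate, e.g. 30114720 B / 3.051 s is about 9.9 MB/s for the kube-apiserver image and 31930104 B / 4.12 s is about 7.7 MB/s for kube-proxy. A short sketch of that arithmetic with the values copied from the entries above (the helper name is ours, not containerd's):

    def pull_rate_mb_s(bytes_read: int, duration_s: float) -> float:
        """Approximate transfer rate for one image pull, in MB/s (10^6 bytes)."""
        return bytes_read / duration_s / 1_000_000

    # Values taken from the kube-apiserver and kube-proxy pulls logged above.
    print(round(pull_rate_mb_s(30114720, 3.051085158), 1))   # ~9.9
    print(round(pull_rate_mb_s(31930104, 4.122571818), 1))   # ~7.7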
Jan 28 06:19:49.526706 containerd[1577]: time="2026-01-28T06:19:49.526634610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:49.528047 containerd[1577]: time="2026-01-28T06:19:49.528014663Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jan 28 06:19:49.530112 containerd[1577]: time="2026-01-28T06:19:49.528779971Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:49.534038 containerd[1577]: time="2026-01-28T06:19:49.533987733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:49.535444 containerd[1577]: time="2026-01-28T06:19:49.535404495Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.323594683s" Jan 28 06:19:49.535519 containerd[1577]: time="2026-01-28T06:19:49.535447872Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 28 06:19:49.536113 containerd[1577]: time="2026-01-28T06:19:49.535980912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 06:19:50.415535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94885821.mount: Deactivated successfully. 
Jan 28 06:19:50.428751 containerd[1577]: time="2026-01-28T06:19:50.428645153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:19:50.430467 containerd[1577]: time="2026-01-28T06:19:50.430429330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 28 06:19:50.431957 containerd[1577]: time="2026-01-28T06:19:50.431865350Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:19:50.435340 containerd[1577]: time="2026-01-28T06:19:50.435251294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:19:50.436688 containerd[1577]: time="2026-01-28T06:19:50.436607007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 900.578339ms" Jan 28 06:19:50.436688 containerd[1577]: time="2026-01-28T06:19:50.436650914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 06:19:50.437885 containerd[1577]: time="2026-01-28T06:19:50.437372054Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 28 06:19:51.062873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98413526.mount: Deactivated successfully. Jan 28 06:19:52.577162 update_engine[1554]: I20260128 06:19:52.576344 1554 update_attempter.cc:509] Updating boot flags... Jan 28 06:19:52.882021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 06:19:52.899978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:19:53.175181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:19:53.187828 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:19:53.265435 kubelet[2351]: E0128 06:19:53.265376 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:19:53.268619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:19:53.268865 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:19:53.269457 systemd[1]: kubelet.service: Consumed 234ms CPU time, 107.9M memory peak. 
Jan 28 06:19:56.331763 containerd[1577]: time="2026-01-28T06:19:56.331675735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:56.333187 containerd[1577]: time="2026-01-28T06:19:56.333118869Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235" Jan 28 06:19:56.334352 containerd[1577]: time="2026-01-28T06:19:56.334311821Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:56.340107 containerd[1577]: time="2026-01-28T06:19:56.339712627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:19:56.341482 containerd[1577]: time="2026-01-28T06:19:56.341442013Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.904031969s" Jan 28 06:19:56.341564 containerd[1577]: time="2026-01-28T06:19:56.341523457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 28 06:20:01.172188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:20:01.173003 systemd[1]: kubelet.service: Consumed 234ms CPU time, 107.9M memory peak. Jan 28 06:20:01.176273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:20:01.215321 systemd[1]: Reload requested from client PID 2392 ('systemctl') (unit session-11.scope)... Jan 28 06:20:01.215377 systemd[1]: Reloading... Jan 28 06:20:01.408095 zram_generator::config[2437]: No configuration found. Jan 28 06:20:01.737795 systemd[1]: Reloading finished in 521 ms. Jan 28 06:20:01.812905 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 06:20:01.813377 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 06:20:01.814035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:20:01.814303 systemd[1]: kubelet.service: Consumed 156ms CPU time, 97.8M memory peak. Jan 28 06:20:01.816948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:20:01.989009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:20:02.002575 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 06:20:02.088323 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:20:02.088323 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 06:20:02.088323 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:20:02.091143 kubelet[2504]: I0128 06:20:02.090651 2504 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:20:02.620156 kubelet[2504]: I0128 06:20:02.620112 2504 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 06:20:02.622000 kubelet[2504]: I0128 06:20:02.620351 2504 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:20:02.622000 kubelet[2504]: I0128 06:20:02.620779 2504 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 06:20:02.667648 kubelet[2504]: E0128 06:20:02.666997 2504 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.78.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 06:20:02.668724 kubelet[2504]: I0128 06:20:02.668678 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 06:20:02.690218 kubelet[2504]: I0128 06:20:02.690166 2504 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 06:20:02.700145 kubelet[2504]: I0128 06:20:02.700115 2504 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 06:20:02.703817 kubelet[2504]: I0128 06:20:02.703774 2504 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 06:20:02.706873 kubelet[2504]: I0128 06:20:02.703926 2504 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-4e3e3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 06:20:02.707305 kubelet[2504]: I0128 06:20:02.707282 2504 
topology_manager.go:138] "Creating topology manager with none policy" Jan 28 06:20:02.707421 kubelet[2504]: I0128 06:20:02.707402 2504 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 06:20:02.708560 kubelet[2504]: I0128 06:20:02.708538 2504 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:20:02.711056 kubelet[2504]: I0128 06:20:02.711032 2504 kubelet.go:480] "Attempting to sync node with API server" Jan 28 06:20:02.711243 kubelet[2504]: I0128 06:20:02.711221 2504 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 06:20:02.711446 kubelet[2504]: I0128 06:20:02.711425 2504 kubelet.go:386] "Adding apiserver pod source" Jan 28 06:20:02.713209 kubelet[2504]: I0128 06:20:02.713181 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 06:20:02.763875 kubelet[2504]: E0128 06:20:02.763697 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.78.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 06:20:02.763875 kubelet[2504]: E0128 06:20:02.763865 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.78.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4e3e3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 06:20:02.764197 kubelet[2504]: I0128 06:20:02.764034 2504 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 06:20:02.765361 kubelet[2504]: I0128 06:20:02.764746 2504 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 06:20:02.768295 kubelet[2504]: W0128 06:20:02.768264 2504 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
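The "Creating Container Manager object based on Node Config" entry above dumps the resolved node configuration as a JSON-like blob, including the default hard eviction thresholds (memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%). Assuming that blob parses as plain JSON, which is how it prints here, a short sketch that pulls the thresholds back out (the function name and parsing are illustrative only):

    import json, re

    def eviction_thresholds(container_manager_entry: str):
        """Pull the hard-eviction thresholds out of the kubelet's Node Config dump."""
        blob = re.search(r"nodeConfig=(\{.*\})", container_manager_entry).group(1)
        cfg = json.loads(blob)
        thresholds = []
        for t in cfg["HardEvictionThresholds"]:
            v = t["Value"]
            shown = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
            thresholds.append((t["Signal"], shown))
        return thresholds

    # Against the entry above this should yield:
    # [('memory.available', '100Mi'), ('nodefs.available', '10%'),
    #  ('nodefs.inodesFree', '5%'), ('imagefs.available', '15%'),
    #  ('imagefs.inodesFree', '5%')]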
Jan 28 06:20:02.776212 kubelet[2504]: I0128 06:20:02.776176 2504 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 06:20:02.776306 kubelet[2504]: I0128 06:20:02.776270 2504 server.go:1289] "Started kubelet" Jan 28 06:20:02.778284 kubelet[2504]: I0128 06:20:02.778026 2504 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 06:20:02.781307 kubelet[2504]: I0128 06:20:02.780871 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 06:20:02.781562 kubelet[2504]: I0128 06:20:02.781534 2504 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 06:20:02.788173 kubelet[2504]: I0128 06:20:02.787452 2504 server.go:317] "Adding debug handlers to kubelet server" Jan 28 06:20:02.792103 kubelet[2504]: E0128 06:20:02.788982 2504 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.78.222:6443/api/v1/namespaces/default/events\": dial tcp 10.230.78.222:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-4e3e3.gb1.brightbox.com.188ed0ba70af47e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-4e3e3.gb1.brightbox.com,UID:srv-4e3e3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-4e3e3.gb1.brightbox.com,},FirstTimestamp:2026-01-28 06:20:02.77621348 +0000 UTC m=+0.767906811,LastTimestamp:2026-01-28 06:20:02.77621348 +0000 UTC m=+0.767906811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-4e3e3.gb1.brightbox.com,}" Jan 28 06:20:02.794424 kubelet[2504]: I0128 06:20:02.794400 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 06:20:02.795495 kubelet[2504]: I0128 06:20:02.795243 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 06:20:02.806581 kubelet[2504]: I0128 06:20:02.806543 2504 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 06:20:02.806971 kubelet[2504]: E0128 06:20:02.806940 2504 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" Jan 28 06:20:02.816394 kubelet[2504]: I0128 06:20:02.816356 2504 factory.go:223] Registration of the systemd container factory successfully Jan 28 06:20:02.816653 kubelet[2504]: I0128 06:20:02.816624 2504 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 06:20:02.818101 kubelet[2504]: E0128 06:20:02.817818 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4e3e3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.222:6443: connect: connection refused" interval="200ms" Jan 28 06:20:02.819796 kubelet[2504]: I0128 06:20:02.819767 2504 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 06:20:02.819910 kubelet[2504]: I0128 06:20:02.819889 2504 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:20:02.820451 kubelet[2504]: E0128 06:20:02.820405 2504 reflector.go:200] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.230.78.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 06:20:02.823279 kubelet[2504]: I0128 06:20:02.823250 2504 factory.go:223] Registration of the containerd container factory successfully Jan 28 06:20:02.839362 kubelet[2504]: E0128 06:20:02.839262 2504 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 06:20:02.840533 kubelet[2504]: I0128 06:20:02.840339 2504 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 06:20:02.842199 kubelet[2504]: I0128 06:20:02.842176 2504 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 06:20:02.842330 kubelet[2504]: I0128 06:20:02.842301 2504 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 06:20:02.842488 kubelet[2504]: I0128 06:20:02.842465 2504 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 06:20:02.842594 kubelet[2504]: I0128 06:20:02.842576 2504 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 06:20:02.842767 kubelet[2504]: E0128 06:20:02.842732 2504 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:20:02.855488 kubelet[2504]: E0128 06:20:02.855443 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.78.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 06:20:02.857843 kubelet[2504]: I0128 06:20:02.857512 2504 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 06:20:02.857843 kubelet[2504]: I0128 06:20:02.857535 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 06:20:02.857843 kubelet[2504]: I0128 06:20:02.857566 2504 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:20:02.860745 kubelet[2504]: I0128 06:20:02.860720 2504 policy_none.go:49] "None policy: Start" Jan 28 06:20:02.860962 kubelet[2504]: I0128 06:20:02.860940 2504 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 06:20:02.861110 kubelet[2504]: I0128 06:20:02.861091 2504 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:20:02.871644 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 06:20:02.885170 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 06:20:02.890684 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 06:20:02.907531 kubelet[2504]: E0128 06:20:02.907491 2504 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" Jan 28 06:20:02.911178 kubelet[2504]: E0128 06:20:02.910575 2504 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 06:20:02.911178 kubelet[2504]: I0128 06:20:02.910850 2504 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 06:20:02.911178 kubelet[2504]: I0128 06:20:02.910879 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:20:02.912887 kubelet[2504]: I0128 06:20:02.912865 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:20:02.916776 kubelet[2504]: E0128 06:20:02.916747 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 06:20:02.917208 kubelet[2504]: E0128 06:20:02.917185 2504 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-4e3e3.gb1.brightbox.com\" not found" Jan 28 06:20:03.014537 kubelet[2504]: I0128 06:20:03.014452 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.016048 kubelet[2504]: E0128 06:20:03.016001 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.222:6443/api/v1/nodes\": dial tcp 10.230.78.222:6443: connect: connection refused" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.019262 kubelet[2504]: E0128 06:20:03.018882 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4e3e3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.222:6443: connect: connection refused" interval="400ms" Jan 28 06:20:03.026058 systemd[1]: Created slice kubepods-burstable-podd7cd41b75da08b311fca3e5b94d4285c.slice - libcontainer container kubepods-burstable-podd7cd41b75da08b311fca3e5b94d4285c.slice. Jan 28 06:20:03.046354 kubelet[2504]: E0128 06:20:03.046314 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.051562 systemd[1]: Created slice kubepods-burstable-poda89b4af11c8fc97582fad12fa408085e.slice - libcontainer container kubepods-burstable-poda89b4af11c8fc97582fad12fa408085e.slice. Jan 28 06:20:03.055500 kubelet[2504]: E0128 06:20:03.055468 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.059544 systemd[1]: Created slice kubepods-burstable-podbf4ffd207194f0325e349988fb3c86ea.slice - libcontainer container kubepods-burstable-podbf4ffd207194f0325e349988fb3c86ea.slice. 
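The "Failed to ensure lease exists, will retry" entries back off by doubling: interval=200ms above, then 400ms, and 800ms and 1.6s further down, until the API server on 10.230.78.222:6443 starts answering. A trivial sketch reproducing that observed schedule (only the values visible in this log; the generator is illustrative, not the kubelet's actual implementation):

    def lease_retry_intervals(start_ms: int = 200, steps: int = 4):
        """Yield the doubling retry intervals seen in the log: 200ms, 400ms, 800ms, 1.6s."""
        interval = start_ms
        for _ in range(steps):
            yield interval
            interval *= 2

    print([f"{ms/1000:g}s" if ms >= 1000 else f"{ms}ms" for ms in lease_retry_intervals()])
    # ['200ms', '400ms', '800ms', '1.6s']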
Jan 28 06:20:03.062540 kubelet[2504]: E0128 06:20:03.062285 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.120807 kubelet[2504]: I0128 06:20:03.120748 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf4ffd207194f0325e349988fb3c86ea-kubeconfig\") pod \"kube-scheduler-srv-4e3e3.gb1.brightbox.com\" (UID: \"bf4ffd207194f0325e349988fb3c86ea\") " pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.121978 kubelet[2504]: I0128 06:20:03.121540 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd41b75da08b311fca3e5b94d4285c-k8s-certs\") pod \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" (UID: \"d7cd41b75da08b311fca3e5b94d4285c\") " pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.121978 kubelet[2504]: I0128 06:20:03.121683 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7cd41b75da08b311fca3e5b94d4285c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" (UID: \"d7cd41b75da08b311fca3e5b94d4285c\") " pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.121978 kubelet[2504]: I0128 06:20:03.121760 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-flexvolume-dir\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.121978 kubelet[2504]: I0128 06:20:03.121794 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-k8s-certs\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.121978 kubelet[2504]: I0128 06:20:03.121864 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-kubeconfig\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.122445 kubelet[2504]: I0128 06:20:03.121933 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.122445 kubelet[2504]: I0128 06:20:03.122160 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d7cd41b75da08b311fca3e5b94d4285c-ca-certs\") pod \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" (UID: \"d7cd41b75da08b311fca3e5b94d4285c\") " pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.122773 kubelet[2504]: I0128 06:20:03.122684 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-ca-certs\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.218580 kubelet[2504]: I0128 06:20:03.218532 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.219307 kubelet[2504]: E0128 06:20:03.219215 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.222:6443/api/v1/nodes\": dial tcp 10.230.78.222:6443: connect: connection refused" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.351305 containerd[1577]: time="2026-01-28T06:20:03.348972254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-4e3e3.gb1.brightbox.com,Uid:d7cd41b75da08b311fca3e5b94d4285c,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:03.365268 containerd[1577]: time="2026-01-28T06:20:03.364698743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-4e3e3.gb1.brightbox.com,Uid:a89b4af11c8fc97582fad12fa408085e,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:03.365665 containerd[1577]: time="2026-01-28T06:20:03.365627352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-4e3e3.gb1.brightbox.com,Uid:bf4ffd207194f0325e349988fb3c86ea,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:03.420757 kubelet[2504]: E0128 06:20:03.420607 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4e3e3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.222:6443: connect: connection refused" interval="800ms" Jan 28 06:20:03.513031 containerd[1577]: time="2026-01-28T06:20:03.512620066Z" level=info msg="connecting to shim 6a59b0142b74d2dfe5ca11524e5a776b60436a18f64a155ac931026b8d10cd80" address="unix:///run/containerd/s/9c9829c4e674cdf978ad9bed9956f0a729dbde1076c8dcbfab03ab14ca133c68" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:03.514441 containerd[1577]: time="2026-01-28T06:20:03.514409277Z" level=info msg="connecting to shim c77d172e0ea788e80d168dc7ca0f8fc318ac7973a48af514e6e15f8464e32d2d" address="unix:///run/containerd/s/7f9392fec8552886966274c0e0242c4b5cec498096175e3a7bde0e796bd84fc6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:03.519401 containerd[1577]: time="2026-01-28T06:20:03.514664156Z" level=info msg="connecting to shim f35b6888b18e27e64b3f232e1c01c6c4328a7a1467c435677d42412fd9e11c6e" address="unix:///run/containerd/s/1bedab64c8d26983a4d7d05308b6133c8fe4719181ca1c1f3a9cc5ec50f6cddb" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:03.622795 kubelet[2504]: I0128 06:20:03.622340 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.622795 kubelet[2504]: E0128 06:20:03.622744 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.222:6443/api/v1/nodes\": dial tcp 
10.230.78.222:6443: connect: connection refused" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:03.650340 systemd[1]: Started cri-containerd-c77d172e0ea788e80d168dc7ca0f8fc318ac7973a48af514e6e15f8464e32d2d.scope - libcontainer container c77d172e0ea788e80d168dc7ca0f8fc318ac7973a48af514e6e15f8464e32d2d. Jan 28 06:20:03.660228 systemd[1]: Started cri-containerd-6a59b0142b74d2dfe5ca11524e5a776b60436a18f64a155ac931026b8d10cd80.scope - libcontainer container 6a59b0142b74d2dfe5ca11524e5a776b60436a18f64a155ac931026b8d10cd80. Jan 28 06:20:03.665953 systemd[1]: Started cri-containerd-f35b6888b18e27e64b3f232e1c01c6c4328a7a1467c435677d42412fd9e11c6e.scope - libcontainer container f35b6888b18e27e64b3f232e1c01c6c4328a7a1467c435677d42412fd9e11c6e. Jan 28 06:20:03.774205 containerd[1577]: time="2026-01-28T06:20:03.773947805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-4e3e3.gb1.brightbox.com,Uid:d7cd41b75da08b311fca3e5b94d4285c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a59b0142b74d2dfe5ca11524e5a776b60436a18f64a155ac931026b8d10cd80\"" Jan 28 06:20:03.788602 containerd[1577]: time="2026-01-28T06:20:03.788320277Z" level=info msg="CreateContainer within sandbox \"6a59b0142b74d2dfe5ca11524e5a776b60436a18f64a155ac931026b8d10cd80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 06:20:03.819689 containerd[1577]: time="2026-01-28T06:20:03.819632580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-4e3e3.gb1.brightbox.com,Uid:a89b4af11c8fc97582fad12fa408085e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77d172e0ea788e80d168dc7ca0f8fc318ac7973a48af514e6e15f8464e32d2d\"" Jan 28 06:20:03.820313 containerd[1577]: time="2026-01-28T06:20:03.820282898Z" level=info msg="Container e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:03.826156 containerd[1577]: time="2026-01-28T06:20:03.825584960Z" level=info msg="CreateContainer within sandbox \"c77d172e0ea788e80d168dc7ca0f8fc318ac7973a48af514e6e15f8464e32d2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 06:20:03.838647 containerd[1577]: time="2026-01-28T06:20:03.838595820Z" level=info msg="CreateContainer within sandbox \"6a59b0142b74d2dfe5ca11524e5a776b60436a18f64a155ac931026b8d10cd80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0\"" Jan 28 06:20:03.840039 containerd[1577]: time="2026-01-28T06:20:03.840008297Z" level=info msg="StartContainer for \"e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0\"" Jan 28 06:20:03.841464 containerd[1577]: time="2026-01-28T06:20:03.841436168Z" level=info msg="Container bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:03.844575 containerd[1577]: time="2026-01-28T06:20:03.844542470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-4e3e3.gb1.brightbox.com,Uid:bf4ffd207194f0325e349988fb3c86ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"f35b6888b18e27e64b3f232e1c01c6c4328a7a1467c435677d42412fd9e11c6e\"" Jan 28 06:20:03.844834 containerd[1577]: time="2026-01-28T06:20:03.844584027Z" level=info msg="connecting to shim e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0" address="unix:///run/containerd/s/9c9829c4e674cdf978ad9bed9956f0a729dbde1076c8dcbfab03ab14ca133c68" 
protocol=ttrpc version=3 Jan 28 06:20:03.852618 containerd[1577]: time="2026-01-28T06:20:03.852577042Z" level=info msg="CreateContainer within sandbox \"f35b6888b18e27e64b3f232e1c01c6c4328a7a1467c435677d42412fd9e11c6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 06:20:03.853506 containerd[1577]: time="2026-01-28T06:20:03.853466131Z" level=info msg="CreateContainer within sandbox \"c77d172e0ea788e80d168dc7ca0f8fc318ac7973a48af514e6e15f8464e32d2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6\"" Jan 28 06:20:03.853921 containerd[1577]: time="2026-01-28T06:20:03.853857823Z" level=info msg="StartContainer for \"bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6\"" Jan 28 06:20:03.855354 containerd[1577]: time="2026-01-28T06:20:03.855309883Z" level=info msg="connecting to shim bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6" address="unix:///run/containerd/s/7f9392fec8552886966274c0e0242c4b5cec498096175e3a7bde0e796bd84fc6" protocol=ttrpc version=3 Jan 28 06:20:03.870862 containerd[1577]: time="2026-01-28T06:20:03.870812558Z" level=info msg="Container 0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:03.884472 systemd[1]: Started cri-containerd-e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0.scope - libcontainer container e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0. Jan 28 06:20:03.898835 systemd[1]: Started cri-containerd-bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6.scope - libcontainer container bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6. Jan 28 06:20:03.918713 containerd[1577]: time="2026-01-28T06:20:03.918649367Z" level=info msg="CreateContainer within sandbox \"f35b6888b18e27e64b3f232e1c01c6c4328a7a1467c435677d42412fd9e11c6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9\"" Jan 28 06:20:03.920356 containerd[1577]: time="2026-01-28T06:20:03.920237429Z" level=info msg="StartContainer for \"0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9\"" Jan 28 06:20:03.923870 containerd[1577]: time="2026-01-28T06:20:03.922645360Z" level=info msg="connecting to shim 0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9" address="unix:///run/containerd/s/1bedab64c8d26983a4d7d05308b6133c8fe4719181ca1c1f3a9cc5ec50f6cddb" protocol=ttrpc version=3 Jan 28 06:20:03.936332 kubelet[2504]: E0128 06:20:03.936205 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.78.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 06:20:03.962785 systemd[1]: Started cri-containerd-0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9.scope - libcontainer container 0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9. 
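Every "dial tcp 10.230.78.222:6443: connect: connection refused" error in this stretch has the same cause: the kubelet is up before the kube-apiserver static pod it is in the middle of starting, so its watches, lease updates, and node registrations all fail until that container is serving. A minimal TCP reachability probe for the endpoint taken from those errors (purely illustrative; nothing in this log runs such a check):

    import socket

    def api_server_reachable(host: str = "10.230.78.222", port: int = 6443,
                             timeout: float = 2.0) -> bool:
        """Return True once something is accepting TCP connections on the API server port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(api_server_reachable())  # False while the log still shows "connection refused"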
Jan 28 06:20:04.020423 containerd[1577]: time="2026-01-28T06:20:04.020191994Z" level=info msg="StartContainer for \"e23e37a613757c2b2fd10c9a02b00b7638831d4c7c4566069f8c61fa992c49e0\" returns successfully" Jan 28 06:20:04.043012 containerd[1577]: time="2026-01-28T06:20:04.042860191Z" level=info msg="StartContainer for \"bccdb3f16a1fb1288bde3b41455b07a2c47d253ba45d2c0294cec912f0c7e8f6\" returns successfully" Jan 28 06:20:04.102551 containerd[1577]: time="2026-01-28T06:20:04.102497111Z" level=info msg="StartContainer for \"0e553d9dcf707e3dd6d5e7f205c8b462acc0d61d360a39b218f2a376949a44e9\" returns successfully" Jan 28 06:20:04.140926 kubelet[2504]: E0128 06:20:04.140867 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.78.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 06:20:04.222433 kubelet[2504]: E0128 06:20:04.222378 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4e3e3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.222:6443: connect: connection refused" interval="1.6s" Jan 28 06:20:04.228084 kubelet[2504]: E0128 06:20:04.226638 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.78.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 06:20:04.238796 kubelet[2504]: E0128 06:20:04.238751 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.78.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4e3e3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.78.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 06:20:04.425298 kubelet[2504]: I0128 06:20:04.425259 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:04.425687 kubelet[2504]: E0128 06:20:04.425653 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.222:6443/api/v1/nodes\": dial tcp 10.230.78.222:6443: connect: connection refused" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:04.924057 kubelet[2504]: E0128 06:20:04.924001 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:04.925527 kubelet[2504]: E0128 06:20:04.925500 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:04.930505 kubelet[2504]: E0128 06:20:04.930478 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:05.934354 kubelet[2504]: E0128 06:20:05.934315 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:05.935547 kubelet[2504]: E0128 06:20:05.935144 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:05.935946 kubelet[2504]: E0128 06:20:05.935914 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:06.031132 kubelet[2504]: I0128 06:20:06.030459 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:06.935678 kubelet[2504]: E0128 06:20:06.935639 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:06.936230 kubelet[2504]: E0128 06:20:06.935693 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:06.937203 kubelet[2504]: E0128 06:20:06.937160 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.209555 kubelet[2504]: E0128 06:20:07.209384 2504 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-4e3e3.gb1.brightbox.com\" not found" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.258518 kubelet[2504]: E0128 06:20:07.258349 2504 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-4e3e3.gb1.brightbox.com.188ed0ba70af47e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-4e3e3.gb1.brightbox.com,UID:srv-4e3e3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-4e3e3.gb1.brightbox.com,},FirstTimestamp:2026-01-28 06:20:02.77621348 +0000 UTC m=+0.767906811,LastTimestamp:2026-01-28 06:20:02.77621348 +0000 UTC m=+0.767906811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-4e3e3.gb1.brightbox.com,}" Jan 28 06:20:07.305196 kubelet[2504]: I0128 06:20:07.305136 2504 kubelet_node_status.go:78] "Successfully registered node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.305196 kubelet[2504]: E0128 06:20:07.305186 2504 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-4e3e3.gb1.brightbox.com\": node \"srv-4e3e3.gb1.brightbox.com\" not found" Jan 28 06:20:07.321542 kubelet[2504]: I0128 06:20:07.321293 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.342812 kubelet[2504]: E0128 06:20:07.342759 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.342812 kubelet[2504]: I0128 
06:20:07.342804 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.350835 kubelet[2504]: E0128 06:20:07.350802 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4e3e3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.350835 kubelet[2504]: I0128 06:20:07.350837 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.355463 kubelet[2504]: E0128 06:20:07.355381 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:07.761425 kubelet[2504]: I0128 06:20:07.761373 2504 apiserver.go:52] "Watching apiserver" Jan 28 06:20:07.820574 kubelet[2504]: I0128 06:20:07.820507 2504 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 06:20:08.577708 kubelet[2504]: I0128 06:20:08.577650 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:08.588266 kubelet[2504]: I0128 06:20:08.588221 2504 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:20:09.543469 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-11.scope)... Jan 28 06:20:09.543906 systemd[1]: Reloading... Jan 28 06:20:09.730195 zram_generator::config[2838]: No configuration found. Jan 28 06:20:10.119451 systemd[1]: Reloading finished in 574 ms. Jan 28 06:20:10.170996 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:20:10.189913 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 06:20:10.191717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:20:10.191963 systemd[1]: kubelet.service: Consumed 1.279s CPU time, 128.9M memory peak. Jan 28 06:20:10.195973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:20:10.491906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:20:10.507318 (kubelet)[2893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 06:20:10.607270 kubelet[2893]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:20:10.609100 kubelet[2893]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 06:20:10.609100 kubelet[2893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
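Editor's note: the repeated "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" entries above are a bootstrap-ordering symptom, not a persistent fault: the kubelet tries to mirror the static control-plane pods before the API server has created its built-in priority classes. A minimal client-go sketch that waits for the built-in class to appear (the kubeconfig path is an assumption for illustration, not taken from these logs):

```go
// Hypothetical helper: poll until the built-in "system-node-critical"
// PriorityClass exists, i.e. the condition the kubelet retries on above.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig at this path; adjust for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pc, err := cs.SchedulingV1().PriorityClasses().Get(
			context.Background(), "system-node-critical", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found %s (value %d)\n", pc.Name, pc.Value)
			return
		}
		if !apierrors.IsNotFound(err) {
			panic(err) // anything other than "not created yet" is unexpected
		}
		time.Sleep(2 * time.Second)
	}
}
```

Once the default classes exist, the kubelet's next sync creates the mirror pods, which is what the later "already exists" entries in this log show.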
Jan 28 06:20:10.609100 kubelet[2893]: I0128 06:20:10.607922 2893 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:20:10.624970 kubelet[2893]: I0128 06:20:10.624930 2893 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 06:20:10.625244 kubelet[2893]: I0128 06:20:10.625225 2893 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:20:10.625755 kubelet[2893]: I0128 06:20:10.625734 2893 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 06:20:10.629720 kubelet[2893]: I0128 06:20:10.628482 2893 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 06:20:10.661993 kubelet[2893]: I0128 06:20:10.661928 2893 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 06:20:10.682818 kubelet[2893]: I0128 06:20:10.682782 2893 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 06:20:10.690592 sudo[2907]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 06:20:10.691502 sudo[2907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 06:20:10.694448 kubelet[2893]: I0128 06:20:10.694118 2893 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 06:20:10.696302 kubelet[2893]: I0128 06:20:10.695370 2893 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 06:20:10.696302 kubelet[2893]: I0128 06:20:10.695746 2893 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-4e3e3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 06:20:10.697230 kubelet[2893]: I0128 06:20:10.697206 2893 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 06:20:10.697336 kubelet[2893]: I0128 
06:20:10.697320 2893 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 06:20:10.699054 kubelet[2893]: I0128 06:20:10.698740 2893 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:20:10.699539 kubelet[2893]: I0128 06:20:10.699496 2893 kubelet.go:480] "Attempting to sync node with API server" Jan 28 06:20:10.701085 kubelet[2893]: I0128 06:20:10.700164 2893 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 06:20:10.701085 kubelet[2893]: I0128 06:20:10.700207 2893 kubelet.go:386] "Adding apiserver pod source" Jan 28 06:20:10.701085 kubelet[2893]: I0128 06:20:10.700229 2893 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 06:20:10.702411 kubelet[2893]: I0128 06:20:10.702388 2893 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 06:20:10.704137 kubelet[2893]: I0128 06:20:10.704015 2893 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 06:20:10.725730 kubelet[2893]: I0128 06:20:10.725701 2893 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 06:20:10.725938 kubelet[2893]: I0128 06:20:10.725921 2893 server.go:1289] "Started kubelet" Jan 28 06:20:10.732279 kubelet[2893]: I0128 06:20:10.732250 2893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 06:20:10.753639 kubelet[2893]: I0128 06:20:10.753371 2893 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 06:20:10.755082 kubelet[2893]: I0128 06:20:10.754205 2893 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 06:20:10.782288 kubelet[2893]: I0128 06:20:10.759704 2893 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 06:20:10.793205 kubelet[2893]: I0128 06:20:10.761214 2893 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 06:20:10.794339 kubelet[2893]: I0128 06:20:10.794306 2893 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 06:20:10.794414 kubelet[2893]: E0128 06:20:10.761505 2893 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4e3e3.gb1.brightbox.com\" not found" Jan 28 06:20:10.794414 kubelet[2893]: E0128 06:20:10.765286 2893 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 06:20:10.798340 kubelet[2893]: I0128 06:20:10.798290 2893 server.go:317] "Adding debug handlers to kubelet server" Jan 28 06:20:10.805927 kubelet[2893]: I0128 06:20:10.761227 2893 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 06:20:10.809575 kubelet[2893]: I0128 06:20:10.808947 2893 factory.go:223] Registration of the containerd container factory successfully Jan 28 06:20:10.809668 kubelet[2893]: I0128 06:20:10.809598 2893 factory.go:223] Registration of the systemd container factory successfully Jan 28 06:20:10.809788 kubelet[2893]: I0128 06:20:10.809706 2893 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 06:20:10.822725 kubelet[2893]: I0128 06:20:10.809585 2893 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:20:10.851051 kubelet[2893]: I0128 06:20:10.851008 2893 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 06:20:10.868120 kubelet[2893]: I0128 06:20:10.867833 2893 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 06:20:10.868120 kubelet[2893]: I0128 06:20:10.867868 2893 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 06:20:10.868120 kubelet[2893]: I0128 06:20:10.867927 2893 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 06:20:10.868120 kubelet[2893]: I0128 06:20:10.867941 2893 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 06:20:10.868120 kubelet[2893]: E0128 06:20:10.868013 2893 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:20:10.968362 kubelet[2893]: E0128 06:20:10.968309 2893 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 06:20:10.993471 kubelet[2893]: I0128 06:20:10.993415 2893 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 06:20:10.993471 kubelet[2893]: I0128 06:20:10.993442 2893 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 06:20:10.993471 kubelet[2893]: I0128 06:20:10.993480 2893 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:20:10.993737 kubelet[2893]: I0128 06:20:10.993710 2893 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 06:20:10.993786 kubelet[2893]: I0128 06:20:10.993726 2893 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 06:20:10.993786 kubelet[2893]: I0128 06:20:10.993751 2893 policy_none.go:49] "None policy: Start" Jan 28 06:20:10.993786 kubelet[2893]: I0128 06:20:10.993766 2893 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 06:20:10.995057 kubelet[2893]: I0128 06:20:10.995030 2893 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:20:10.996217 kubelet[2893]: I0128 06:20:10.996166 2893 state_mem.go:75] "Updated machine memory state" Jan 28 06:20:11.026149 kubelet[2893]: E0128 06:20:11.026009 2893 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 06:20:11.027285 kubelet[2893]: I0128 06:20:11.026355 2893 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 
06:20:11.027285 kubelet[2893]: I0128 06:20:11.026395 2893 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:20:11.027285 kubelet[2893]: I0128 06:20:11.026769 2893 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:20:11.039689 kubelet[2893]: E0128 06:20:11.037199 2893 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 06:20:11.150794 kubelet[2893]: I0128 06:20:11.150756 2893 kubelet_node_status.go:75] "Attempting to register node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.171044 kubelet[2893]: I0128 06:20:11.170744 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.173284 kubelet[2893]: I0128 06:20:11.173263 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.174221 kubelet[2893]: I0128 06:20:11.174188 2893 kubelet_node_status.go:124] "Node was previously registered" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.174338 kubelet[2893]: I0128 06:20:11.174282 2893 kubelet_node_status.go:78] "Successfully registered node" node="srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.176692 kubelet[2893]: I0128 06:20:11.176570 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.197688 kubelet[2893]: I0128 06:20:11.197092 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:20:11.197688 kubelet[2893]: I0128 06:20:11.197391 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:20:11.197688 kubelet[2893]: I0128 06:20:11.197605 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:20:11.197688 kubelet[2893]: E0128 06:20:11.197653 2893 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4e3e3.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.226120 kubelet[2893]: I0128 06:20:11.225735 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd41b75da08b311fca3e5b94d4285c-ca-certs\") pod \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" (UID: \"d7cd41b75da08b311fca3e5b94d4285c\") " pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227370 kubelet[2893]: I0128 06:20:11.226684 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7cd41b75da08b311fca3e5b94d4285c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" (UID: \"d7cd41b75da08b311fca3e5b94d4285c\") " pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227370 kubelet[2893]: I0128 06:20:11.226746 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-kubeconfig\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227370 kubelet[2893]: I0128 06:20:11.226801 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd41b75da08b311fca3e5b94d4285c-k8s-certs\") pod \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" (UID: \"d7cd41b75da08b311fca3e5b94d4285c\") " pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227370 kubelet[2893]: I0128 06:20:11.226833 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-ca-certs\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227370 kubelet[2893]: I0128 06:20:11.226860 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-flexvolume-dir\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227630 kubelet[2893]: I0128 06:20:11.226887 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-k8s-certs\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227630 kubelet[2893]: I0128 06:20:11.226946 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a89b4af11c8fc97582fad12fa408085e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-4e3e3.gb1.brightbox.com\" (UID: \"a89b4af11c8fc97582fad12fa408085e\") " pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.227630 kubelet[2893]: I0128 06:20:11.226978 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf4ffd207194f0325e349988fb3c86ea-kubeconfig\") pod \"kube-scheduler-srv-4e3e3.gb1.brightbox.com\" (UID: \"bf4ffd207194f0325e349988fb3c86ea\") " pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.431912 sudo[2907]: pam_unix(sudo:session): session closed for user root Jan 28 06:20:11.716838 kubelet[2893]: I0128 06:20:11.715114 2893 apiserver.go:52] "Watching apiserver" Jan 28 06:20:11.806919 kubelet[2893]: I0128 06:20:11.806829 2893 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 06:20:11.948396 kubelet[2893]: I0128 06:20:11.948356 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.962340 kubelet[2893]: I0128 06:20:11.961828 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:20:11.962340 kubelet[2893]: E0128 06:20:11.961889 2893 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-4e3e3.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" Jan 28 06:20:11.984350 kubelet[2893]: I0128 06:20:11.983975 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-4e3e3.gb1.brightbox.com" podStartSLOduration=0.983770286 podStartE2EDuration="983.770286ms" podCreationTimestamp="2026-01-28 06:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:20:11.982836059 +0000 UTC m=+1.460406870" watchObservedRunningTime="2026-01-28 06:20:11.983770286 +0000 UTC m=+1.461341071" Jan 28 06:20:12.005844 kubelet[2893]: I0128 06:20:12.005632 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-4e3e3.gb1.brightbox.com" podStartSLOduration=4.005567428 podStartE2EDuration="4.005567428s" podCreationTimestamp="2026-01-28 06:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:20:12.004397599 +0000 UTC m=+1.481968402" watchObservedRunningTime="2026-01-28 06:20:12.005567428 +0000 UTC m=+1.483138228" Jan 28 06:20:12.021996 kubelet[2893]: I0128 06:20:12.021830 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-4e3e3.gb1.brightbox.com" podStartSLOduration=1.021814145 podStartE2EDuration="1.021814145s" podCreationTimestamp="2026-01-28 06:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:20:12.021719701 +0000 UTC m=+1.499290516" watchObservedRunningTime="2026-01-28 06:20:12.021814145 +0000 UTC m=+1.499384930" Jan 28 06:20:13.212893 sudo[1883]: pam_unix(sudo:session): session closed for user root Jan 28 06:20:13.302889 sshd[1882]: Connection closed by 68.220.241.50 port 45748 Jan 28 06:20:13.305052 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:13.311573 systemd[1]: sshd@8-10.230.78.222:22-68.220.241.50:45748.service: Deactivated successfully. Jan 28 06:20:13.312174 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Jan 28 06:20:13.316306 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 06:20:13.316657 systemd[1]: session-11.scope: Consumed 7.199s CPU time, 214.5M memory peak. Jan 28 06:20:13.322315 systemd-logind[1552]: Removed session 11. Jan 28 06:20:14.558832 kubelet[2893]: I0128 06:20:14.558765 2893 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 06:20:14.559912 containerd[1577]: time="2026-01-28T06:20:14.559654746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 06:20:14.560523 kubelet[2893]: I0128 06:20:14.560029 2893 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 06:20:15.549277 systemd[1]: Created slice kubepods-besteffort-podca067f24_4416_4445_9bdd_8f631790158b.slice - libcontainer container kubepods-besteffort-podca067f24_4416_4445_9bdd_8f631790158b.slice. 
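Editor's note: the "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" entries above hand this node its pod range, 192.168.0.0/24. A stdlib-only sketch of what a host-local style allocator might do with that value; this is purely illustrative and is not Cilium's actual IPAM code:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// PodCIDR taken from the kubelet_network.go entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network %s, %d addresses\n", ipnet, 1<<(bits-ones)) // 256 addresses

	// First usable pod address: network address + 1 (illustrative convention).
	first := make(net.IP, len(ipnet.IP.To4()))
	copy(first, ipnet.IP.To4())
	first[3]++
	fmt.Println("first pod IP:", first) // 192.168.0.1
}
```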
Jan 28 06:20:15.571675 systemd[1]: Created slice kubepods-burstable-pode3fce5c6_42b1_47a6_8aba_c0df5ac758aa.slice - libcontainer container kubepods-burstable-pode3fce5c6_42b1_47a6_8aba_c0df5ac758aa.slice. Jan 28 06:20:15.663802 kubelet[2893]: I0128 06:20:15.663737 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca067f24-4416-4445-9bdd-8f631790158b-lib-modules\") pod \"kube-proxy-w5qw9\" (UID: \"ca067f24-4416-4445-9bdd-8f631790158b\") " pod="kube-system/kube-proxy-w5qw9" Jan 28 06:20:15.664358 kubelet[2893]: I0128 06:20:15.663813 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca067f24-4416-4445-9bdd-8f631790158b-kube-proxy\") pod \"kube-proxy-w5qw9\" (UID: \"ca067f24-4416-4445-9bdd-8f631790158b\") " pod="kube-system/kube-proxy-w5qw9" Jan 28 06:20:15.664358 kubelet[2893]: I0128 06:20:15.663868 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-cgroup\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.664358 kubelet[2893]: I0128 06:20:15.663896 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-etc-cni-netd\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.664358 kubelet[2893]: I0128 06:20:15.663968 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-run\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.664358 kubelet[2893]: I0128 06:20:15.664041 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-lib-modules\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.664358 kubelet[2893]: I0128 06:20:15.664142 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-clustermesh-secrets\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665174 kubelet[2893]: I0128 06:20:15.664176 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-config-path\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665174 kubelet[2893]: I0128 06:20:15.664225 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-net\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " 
pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665174 kubelet[2893]: I0128 06:20:15.664251 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-kernel\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665174 kubelet[2893]: I0128 06:20:15.664336 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvrz\" (UniqueName: \"kubernetes.io/projected/ca067f24-4416-4445-9bdd-8f631790158b-kube-api-access-5zvrz\") pod \"kube-proxy-w5qw9\" (UID: \"ca067f24-4416-4445-9bdd-8f631790158b\") " pod="kube-system/kube-proxy-w5qw9" Jan 28 06:20:15.665174 kubelet[2893]: I0128 06:20:15.664392 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca067f24-4416-4445-9bdd-8f631790158b-xtables-lock\") pod \"kube-proxy-w5qw9\" (UID: \"ca067f24-4416-4445-9bdd-8f631790158b\") " pod="kube-system/kube-proxy-w5qw9" Jan 28 06:20:15.665391 kubelet[2893]: I0128 06:20:15.664476 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hostproc\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665391 kubelet[2893]: I0128 06:20:15.664984 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cni-path\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665391 kubelet[2893]: I0128 06:20:15.665025 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-xtables-lock\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665997 kubelet[2893]: I0128 06:20:15.665054 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hubble-tls\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665997 kubelet[2893]: I0128 06:20:15.665769 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-bpf-maps\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.665997 kubelet[2893]: I0128 06:20:15.665828 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtw8\" (UniqueName: \"kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-kube-api-access-mxtw8\") pod \"cilium-tr58x\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " pod="kube-system/cilium-tr58x" Jan 28 06:20:15.694701 systemd[1]: Created slice kubepods-besteffort-podd9a5f7cf_ed9d_448d_b8f3_0aadae891adb.slice - libcontainer container 
kubepods-besteffort-podd9a5f7cf_ed9d_448d_b8f3_0aadae891adb.slice. Jan 28 06:20:15.703959 kubelet[2893]: I0128 06:20:15.703883 2893 status_manager.go:895] "Failed to get status for pod" podUID="d9a5f7cf-ed9d-448d-b8f3-0aadae891adb" pod="kube-system/cilium-operator-6c4d7847fc-wq2js" err="pods \"cilium-operator-6c4d7847fc-wq2js\" is forbidden: User \"system:node:srv-4e3e3.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-4e3e3.gb1.brightbox.com' and this object" Jan 28 06:20:15.767684 kubelet[2893]: I0128 06:20:15.767262 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr949\" (UniqueName: \"kubernetes.io/projected/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-kube-api-access-vr949\") pod \"cilium-operator-6c4d7847fc-wq2js\" (UID: \"d9a5f7cf-ed9d-448d-b8f3-0aadae891adb\") " pod="kube-system/cilium-operator-6c4d7847fc-wq2js" Jan 28 06:20:15.767684 kubelet[2893]: I0128 06:20:15.767400 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wq2js\" (UID: \"d9a5f7cf-ed9d-448d-b8f3-0aadae891adb\") " pod="kube-system/cilium-operator-6c4d7847fc-wq2js" Jan 28 06:20:15.864767 containerd[1577]: time="2026-01-28T06:20:15.863866041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w5qw9,Uid:ca067f24-4416-4445-9bdd-8f631790158b,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:15.887383 containerd[1577]: time="2026-01-28T06:20:15.885926306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tr58x,Uid:e3fce5c6-42b1-47a6-8aba-c0df5ac758aa,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:15.902205 containerd[1577]: time="2026-01-28T06:20:15.902129389Z" level=info msg="connecting to shim 1a892d4534a56b6f10f9977397392784b105e711c829d46f0f08ec29e7aef204" address="unix:///run/containerd/s/edffd02f9dad63e5b0ae8315da01ddc5142e41386998383ec7e903941822dd7d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:15.915816 containerd[1577]: time="2026-01-28T06:20:15.915690996Z" level=info msg="connecting to shim d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df" address="unix:///run/containerd/s/acabafff869c0da50b3d6aa8d51197d9e23df08c0b49b354549e6a3483b14484" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:15.946277 systemd[1]: Started cri-containerd-1a892d4534a56b6f10f9977397392784b105e711c829d46f0f08ec29e7aef204.scope - libcontainer container 1a892d4534a56b6f10f9977397392784b105e711c829d46f0f08ec29e7aef204. Jan 28 06:20:15.974468 systemd[1]: Started cri-containerd-d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df.scope - libcontainer container d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df. 
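Editor's note: the RunPodSandbox and "connecting to shim" entries above show the kubelet driving containerd over its CRI socket. A hedged sketch of querying that same endpoint directly with the CRI API, assuming containerd's default socket path (in practice `crictl pods` does this for you):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI plugin listens on its default socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	// Print namespace/name, state and sandbox id for each pod sandbox,
	// matching the sandbox ids returned in the log entries below.
	for _, sb := range resp.Items {
		fmt.Printf("%s/%s\t%s\t%s\n",
			sb.Metadata.Namespace, sb.Metadata.Name, sb.State, sb.Id)
	}
}
```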
Jan 28 06:20:16.003390 containerd[1577]: time="2026-01-28T06:20:16.003309887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wq2js,Uid:d9a5f7cf-ed9d-448d-b8f3-0aadae891adb,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:16.034106 containerd[1577]: time="2026-01-28T06:20:16.033976958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w5qw9,Uid:ca067f24-4416-4445-9bdd-8f631790158b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a892d4534a56b6f10f9977397392784b105e711c829d46f0f08ec29e7aef204\"" Jan 28 06:20:16.049438 containerd[1577]: time="2026-01-28T06:20:16.049330660Z" level=info msg="connecting to shim 06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379" address="unix:///run/containerd/s/3fbc78955357c94590bd492720497129b09ce18694e5187f77d34846f283d14b" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:16.053635 containerd[1577]: time="2026-01-28T06:20:16.053530940Z" level=info msg="CreateContainer within sandbox \"1a892d4534a56b6f10f9977397392784b105e711c829d46f0f08ec29e7aef204\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 06:20:16.054397 containerd[1577]: time="2026-01-28T06:20:16.054365953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tr58x,Uid:e3fce5c6-42b1-47a6-8aba-c0df5ac758aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\"" Jan 28 06:20:16.057703 containerd[1577]: time="2026-01-28T06:20:16.057538641Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 28 06:20:16.073223 containerd[1577]: time="2026-01-28T06:20:16.073171437Z" level=info msg="Container 0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:16.084684 containerd[1577]: time="2026-01-28T06:20:16.084609560Z" level=info msg="CreateContainer within sandbox \"1a892d4534a56b6f10f9977397392784b105e711c829d46f0f08ec29e7aef204\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463\"" Jan 28 06:20:16.087360 containerd[1577]: time="2026-01-28T06:20:16.087204923Z" level=info msg="StartContainer for \"0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463\"" Jan 28 06:20:16.094538 containerd[1577]: time="2026-01-28T06:20:16.094494670Z" level=info msg="connecting to shim 0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463" address="unix:///run/containerd/s/edffd02f9dad63e5b0ae8315da01ddc5142e41386998383ec7e903941822dd7d" protocol=ttrpc version=3 Jan 28 06:20:16.104377 systemd[1]: Started cri-containerd-06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379.scope - libcontainer container 06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379. Jan 28 06:20:16.130255 systemd[1]: Started cri-containerd-0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463.scope - libcontainer container 0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463. 
Jan 28 06:20:16.198131 containerd[1577]: time="2026-01-28T06:20:16.198026889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wq2js,Uid:d9a5f7cf-ed9d-448d-b8f3-0aadae891adb,Namespace:kube-system,Attempt:0,} returns sandbox id \"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\"" Jan 28 06:20:16.242371 containerd[1577]: time="2026-01-28T06:20:16.242326999Z" level=info msg="StartContainer for \"0e8f18ee6ae036a76895389a9c841497bc22ff730da079160b6924dc1a5f7463\" returns successfully" Jan 28 06:20:16.990381 kubelet[2893]: I0128 06:20:16.988788 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w5qw9" podStartSLOduration=1.988749085 podStartE2EDuration="1.988749085s" podCreationTimestamp="2026-01-28 06:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:20:16.987829967 +0000 UTC m=+6.465400772" watchObservedRunningTime="2026-01-28 06:20:16.988749085 +0000 UTC m=+6.466319859" Jan 28 06:20:22.532030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636666843.mount: Deactivated successfully. Jan 28 06:20:25.711556 containerd[1577]: time="2026-01-28T06:20:25.711432379Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:20:25.713245 containerd[1577]: time="2026-01-28T06:20:25.713210984Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 28 06:20:25.713939 containerd[1577]: time="2026-01-28T06:20:25.713851751Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:20:25.739050 containerd[1577]: time="2026-01-28T06:20:25.738910094Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.681323352s" Jan 28 06:20:25.739050 containerd[1577]: time="2026-01-28T06:20:25.738961172Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 28 06:20:25.741273 containerd[1577]: time="2026-01-28T06:20:25.741222010Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 28 06:20:25.746300 containerd[1577]: time="2026-01-28T06:20:25.746252444Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 06:20:25.766197 containerd[1577]: time="2026-01-28T06:20:25.764080747Z" level=info msg="Container 5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:25.770315 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1020821250.mount: Deactivated successfully. Jan 28 06:20:25.780523 containerd[1577]: time="2026-01-28T06:20:25.780352491Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\"" Jan 28 06:20:25.781302 containerd[1577]: time="2026-01-28T06:20:25.781257500Z" level=info msg="StartContainer for \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\"" Jan 28 06:20:25.782592 containerd[1577]: time="2026-01-28T06:20:25.782550576Z" level=info msg="connecting to shim 5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687" address="unix:///run/containerd/s/acabafff869c0da50b3d6aa8d51197d9e23df08c0b49b354549e6a3483b14484" protocol=ttrpc version=3 Jan 28 06:20:25.856294 systemd[1]: Started cri-containerd-5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687.scope - libcontainer container 5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687. Jan 28 06:20:25.903605 containerd[1577]: time="2026-01-28T06:20:25.903561145Z" level=info msg="StartContainer for \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\" returns successfully" Jan 28 06:20:25.922820 systemd[1]: cri-containerd-5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687.scope: Deactivated successfully. Jan 28 06:20:25.948453 containerd[1577]: time="2026-01-28T06:20:25.948243989Z" level=info msg="received container exit event container_id:\"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\" id:\"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\" pid:3314 exited_at:{seconds:1769581225 nanos:927838961}" Jan 28 06:20:25.989105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687-rootfs.mount: Deactivated successfully. 
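Editor's note: the exited_at field in the container exit event above is a raw seconds/nanoseconds pair. Converting it back to wall-clock time (values copied verbatim from that event) lines it up with the journal timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// seconds/nanos copied from the mount-cgroup exit event above.
	t := time.Unix(1769581225, 927838961).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2026-01-28T06:20:25.927838961Z
}
```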
Jan 28 06:20:27.044888 containerd[1577]: time="2026-01-28T06:20:27.044039812Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 06:20:27.059085 containerd[1577]: time="2026-01-28T06:20:27.056561611Z" level=info msg="Container e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:27.080577 containerd[1577]: time="2026-01-28T06:20:27.080513993Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\"" Jan 28 06:20:27.083323 containerd[1577]: time="2026-01-28T06:20:27.083240996Z" level=info msg="StartContainer for \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\"" Jan 28 06:20:27.085178 containerd[1577]: time="2026-01-28T06:20:27.085046136Z" level=info msg="connecting to shim e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0" address="unix:///run/containerd/s/acabafff869c0da50b3d6aa8d51197d9e23df08c0b49b354549e6a3483b14484" protocol=ttrpc version=3 Jan 28 06:20:27.137327 systemd[1]: Started cri-containerd-e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0.scope - libcontainer container e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0. Jan 28 06:20:27.191269 containerd[1577]: time="2026-01-28T06:20:27.191203245Z" level=info msg="StartContainer for \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\" returns successfully" Jan 28 06:20:27.218052 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 06:20:27.218789 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:20:27.219546 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:20:27.223668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:20:27.227399 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 06:20:27.228196 systemd[1]: cri-containerd-e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0.scope: Deactivated successfully. Jan 28 06:20:27.236084 containerd[1577]: time="2026-01-28T06:20:27.235241018Z" level=info msg="received container exit event container_id:\"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\" id:\"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\" pid:3358 exited_at:{seconds:1769581227 nanos:234301504}" Jan 28 06:20:27.260738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:20:28.060136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0-rootfs.mount: Deactivated successfully. Jan 28 06:20:28.067488 containerd[1577]: time="2026-01-28T06:20:28.067431682Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 06:20:28.095145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119174731.mount: Deactivated successfully. 
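Editor's note: the apply-sysctl-overwrites init container above adjusts kernel parameters, and the host's systemd-sysctl unit is cycled so its settings do not clobber them. A small, purely illustrative sketch of reading one such parameter (ip_forward, commonly required for pod routing) straight from /proc/sys, the same interface these components use:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl reads a kernel parameter via procfs, e.g. "net.ipv4.ip_forward".
func readSysctl(name string) (string, error) {
	path := "/proc/sys/" + strings.ReplaceAll(name, ".", "/")
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := readSysctl("net.ipv4.ip_forward")
	if err != nil {
		panic(err)
	}
	fmt.Println("net.ipv4.ip_forward =", v)
}
```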
Jan 28 06:20:28.097719 containerd[1577]: time="2026-01-28T06:20:28.095468191Z" level=info msg="Container 97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:28.145776 containerd[1577]: time="2026-01-28T06:20:28.145629491Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\"" Jan 28 06:20:28.149790 containerd[1577]: time="2026-01-28T06:20:28.149745608Z" level=info msg="StartContainer for \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\"" Jan 28 06:20:28.163324 containerd[1577]: time="2026-01-28T06:20:28.163125675Z" level=info msg="connecting to shim 97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3" address="unix:///run/containerd/s/acabafff869c0da50b3d6aa8d51197d9e23df08c0b49b354549e6a3483b14484" protocol=ttrpc version=3 Jan 28 06:20:28.212268 systemd[1]: Started cri-containerd-97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3.scope - libcontainer container 97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3. Jan 28 06:20:28.338524 containerd[1577]: time="2026-01-28T06:20:28.338394955Z" level=info msg="StartContainer for \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\" returns successfully" Jan 28 06:20:28.345675 systemd[1]: cri-containerd-97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3.scope: Deactivated successfully. Jan 28 06:20:28.346871 systemd[1]: cri-containerd-97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3.scope: Consumed 52ms CPU time, 5.9M memory peak, 1M read from disk. Jan 28 06:20:28.350044 containerd[1577]: time="2026-01-28T06:20:28.349986211Z" level=info msg="received container exit event container_id:\"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\" id:\"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\" pid:3420 exited_at:{seconds:1769581228 nanos:349524173}" Jan 28 06:20:28.396354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3-rootfs.mount: Deactivated successfully. 
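Editor's note: the mount-bpf-fs step above ensures a BPF filesystem is mounted at /sys/fs/bpf so that pinned BPF objects outlive the agent container. A stdlib sketch that checks for that mount via /proc/mounts, equivalent in spirit to `mount | grep bpf`:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			fmt.Println("bpf filesystem mounted at /sys/fs/bpf")
			return
		}
	}
	fmt.Println("no bpf filesystem at /sys/fs/bpf")
}
```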
Jan 28 06:20:28.815145 containerd[1577]: time="2026-01-28T06:20:28.815058093Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:20:28.816849 containerd[1577]: time="2026-01-28T06:20:28.816818008Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 28 06:20:28.818255 containerd[1577]: time="2026-01-28T06:20:28.818178651Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:20:28.821092 containerd[1577]: time="2026-01-28T06:20:28.820927016Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.079467057s" Jan 28 06:20:28.821092 containerd[1577]: time="2026-01-28T06:20:28.820975410Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 28 06:20:28.826661 containerd[1577]: time="2026-01-28T06:20:28.826615702Z" level=info msg="CreateContainer within sandbox \"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 28 06:20:28.857083 containerd[1577]: time="2026-01-28T06:20:28.857016403Z" level=info msg="Container d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:28.866183 containerd[1577]: time="2026-01-28T06:20:28.866071933Z" level=info msg="CreateContainer within sandbox \"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\"" Jan 28 06:20:28.867452 containerd[1577]: time="2026-01-28T06:20:28.867310693Z" level=info msg="StartContainer for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\"" Jan 28 06:20:28.870688 containerd[1577]: time="2026-01-28T06:20:28.870652965Z" level=info msg="connecting to shim d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545" address="unix:///run/containerd/s/3fbc78955357c94590bd492720497129b09ce18694e5187f77d34846f283d14b" protocol=ttrpc version=3 Jan 28 06:20:28.899270 systemd[1]: Started cri-containerd-d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545.scope - libcontainer container d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545. 
Jan 28 06:20:28.948763 containerd[1577]: time="2026-01-28T06:20:28.948720229Z" level=info msg="StartContainer for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" returns successfully" Jan 28 06:20:29.070519 containerd[1577]: time="2026-01-28T06:20:29.070389807Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 06:20:29.091174 containerd[1577]: time="2026-01-28T06:20:29.091119982Z" level=info msg="Container 974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:29.099817 containerd[1577]: time="2026-01-28T06:20:29.099765911Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\"" Jan 28 06:20:29.104223 containerd[1577]: time="2026-01-28T06:20:29.102751910Z" level=info msg="StartContainer for \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\"" Jan 28 06:20:29.106619 containerd[1577]: time="2026-01-28T06:20:29.105889303Z" level=info msg="connecting to shim 974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f" address="unix:///run/containerd/s/acabafff869c0da50b3d6aa8d51197d9e23df08c0b49b354549e6a3483b14484" protocol=ttrpc version=3 Jan 28 06:20:29.156407 systemd[1]: Started cri-containerd-974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f.scope - libcontainer container 974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f. Jan 28 06:20:29.229329 systemd[1]: cri-containerd-974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f.scope: Deactivated successfully. Jan 28 06:20:29.232425 containerd[1577]: time="2026-01-28T06:20:29.229269789Z" level=info msg="received container exit event container_id:\"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\" id:\"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\" pid:3495 exited_at:{seconds:1769581229 nanos:228424943}" Jan 28 06:20:29.235607 containerd[1577]: time="2026-01-28T06:20:29.235571917Z" level=info msg="StartContainer for \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\" returns successfully" Jan 28 06:20:29.288534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f-rootfs.mount: Deactivated successfully. Jan 28 06:20:30.083877 containerd[1577]: time="2026-01-28T06:20:30.083756079Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 06:20:30.132713 containerd[1577]: time="2026-01-28T06:20:30.131236423Z" level=info msg="Container aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:30.133782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408567968.mount: Deactivated successfully. 
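Editor's note: clean-cilium-state above is the last of the agent pod's init containers (after mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs); once it exits, the cilium-agent container below is created. A hedged client-go sketch, under the same kubeconfig assumption as earlier, for inspecting that progression on the pod named in these entries:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig with read access to kube-system.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the cilium-tr58x entries in this log.
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.Background(), "cilium-tr58x", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		fmt.Printf("%-30s ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
	}
}
```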
Jan 28 06:20:30.146182 containerd[1577]: time="2026-01-28T06:20:30.145197855Z" level=info msg="CreateContainer within sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\"" Jan 28 06:20:30.148978 containerd[1577]: time="2026-01-28T06:20:30.147785329Z" level=info msg="StartContainer for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\"" Jan 28 06:20:30.151101 containerd[1577]: time="2026-01-28T06:20:30.150715237Z" level=info msg="connecting to shim aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a" address="unix:///run/containerd/s/acabafff869c0da50b3d6aa8d51197d9e23df08c0b49b354549e6a3483b14484" protocol=ttrpc version=3 Jan 28 06:20:30.153263 kubelet[2893]: I0128 06:20:30.153159 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wq2js" podStartSLOduration=2.530314325 podStartE2EDuration="15.15204641s" podCreationTimestamp="2026-01-28 06:20:15 +0000 UTC" firstStartedPulling="2026-01-28 06:20:16.200466726 +0000 UTC m=+5.678037505" lastFinishedPulling="2026-01-28 06:20:28.822198804 +0000 UTC m=+18.299769590" observedRunningTime="2026-01-28 06:20:29.146437949 +0000 UTC m=+18.624008748" watchObservedRunningTime="2026-01-28 06:20:30.15204641 +0000 UTC m=+19.629617190" Jan 28 06:20:30.200356 systemd[1]: Started cri-containerd-aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a.scope - libcontainer container aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a. Jan 28 06:20:30.300124 containerd[1577]: time="2026-01-28T06:20:30.298280971Z" level=info msg="StartContainer for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" returns successfully" Jan 28 06:20:30.536632 kubelet[2893]: I0128 06:20:30.536511 2893 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 06:20:30.599015 systemd[1]: Created slice kubepods-burstable-pod4a587878_18ad_42b5_a252_d11dc17879cc.slice - libcontainer container kubepods-burstable-pod4a587878_18ad_42b5_a252_d11dc17879cc.slice. Jan 28 06:20:30.609089 systemd[1]: Created slice kubepods-burstable-poda945fe3f_0205_4441_b12b_9b50ca3124a2.slice - libcontainer container kubepods-burstable-poda945fe3f_0205_4441_b12b_9b50ca3124a2.slice. 
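Editor's note: the podStartSLOduration reported for cilium-operator above is, up to display rounding, the end-to-end startup time minus the image-pull window. A quick check with the timestamps copied from that entry (a reading of the logged numbers, not the kubelet's actual code):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the cilium-operator pod_startup_latency_tracker entry above.
	created := mustParse("2026-01-28 06:20:15 +0000 UTC")
	firstPull := mustParse("2026-01-28 06:20:16.200466726 +0000 UTC")
	lastPull := mustParse("2026-01-28 06:20:28.822198804 +0000 UTC")
	observed := mustParse("2026-01-28 06:20:30.15204641 +0000 UTC")

	e2e := observed.Sub(created)    // ~15.152s, the logged podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~12.622s spent pulling the operator image
	fmt.Println("e2e:", e2e, "pull:", pull, "e2e-pull:", e2e-pull) // ~2.530s, the logged SLO duration
}
```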
Jan 28 06:20:30.676401 kubelet[2893]: I0128 06:20:30.676198 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a945fe3f-0205-4441-b12b-9b50ca3124a2-config-volume\") pod \"coredns-674b8bbfcf-rq6w5\" (UID: \"a945fe3f-0205-4441-b12b-9b50ca3124a2\") " pod="kube-system/coredns-674b8bbfcf-rq6w5" Jan 28 06:20:30.676401 kubelet[2893]: I0128 06:20:30.676281 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a587878-18ad-42b5-a252-d11dc17879cc-config-volume\") pod \"coredns-674b8bbfcf-vstnc\" (UID: \"4a587878-18ad-42b5-a252-d11dc17879cc\") " pod="kube-system/coredns-674b8bbfcf-vstnc" Jan 28 06:20:30.676637 kubelet[2893]: I0128 06:20:30.676385 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbjr\" (UniqueName: \"kubernetes.io/projected/4a587878-18ad-42b5-a252-d11dc17879cc-kube-api-access-7gbjr\") pod \"coredns-674b8bbfcf-vstnc\" (UID: \"4a587878-18ad-42b5-a252-d11dc17879cc\") " pod="kube-system/coredns-674b8bbfcf-vstnc" Jan 28 06:20:30.676637 kubelet[2893]: I0128 06:20:30.676521 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw77t\" (UniqueName: \"kubernetes.io/projected/a945fe3f-0205-4441-b12b-9b50ca3124a2-kube-api-access-pw77t\") pod \"coredns-674b8bbfcf-rq6w5\" (UID: \"a945fe3f-0205-4441-b12b-9b50ca3124a2\") " pod="kube-system/coredns-674b8bbfcf-rq6w5" Jan 28 06:20:30.910303 containerd[1577]: time="2026-01-28T06:20:30.910017834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vstnc,Uid:4a587878-18ad-42b5-a252-d11dc17879cc,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:30.916141 containerd[1577]: time="2026-01-28T06:20:30.915907197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rq6w5,Uid:a945fe3f-0205-4441-b12b-9b50ca3124a2,Namespace:kube-system,Attempt:0,}" Jan 28 06:20:31.152387 kubelet[2893]: I0128 06:20:31.152274 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tr58x" podStartSLOduration=6.468789312 podStartE2EDuration="16.152243419s" podCreationTimestamp="2026-01-28 06:20:15 +0000 UTC" firstStartedPulling="2026-01-28 06:20:16.056897388 +0000 UTC m=+5.534468175" lastFinishedPulling="2026-01-28 06:20:25.740351496 +0000 UTC m=+15.217922282" observedRunningTime="2026-01-28 06:20:31.145936253 +0000 UTC m=+20.623507051" watchObservedRunningTime="2026-01-28 06:20:31.152243419 +0000 UTC m=+20.629814205" Jan 28 06:20:33.363291 systemd-networkd[1494]: cilium_host: Link UP Jan 28 06:20:33.363586 systemd-networkd[1494]: cilium_net: Link UP Jan 28 06:20:33.364606 systemd-networkd[1494]: cilium_net: Gained carrier Jan 28 06:20:33.365694 systemd-networkd[1494]: cilium_host: Gained carrier Jan 28 06:20:33.564056 systemd-networkd[1494]: cilium_vxlan: Link UP Jan 28 06:20:33.564086 systemd-networkd[1494]: cilium_vxlan: Gained carrier Jan 28 06:20:33.708437 systemd-networkd[1494]: cilium_net: Gained IPv6LL Jan 28 06:20:34.120134 kernel: NET: Registered PF_ALG protocol family Jan 28 06:20:34.227573 systemd-networkd[1494]: cilium_host: Gained IPv6LL Jan 28 06:20:34.675429 systemd-networkd[1494]: cilium_vxlan: Gained IPv6LL Jan 28 06:20:35.219069 systemd-networkd[1494]: lxc_health: Link UP Jan 28 06:20:35.231196 systemd-networkd[1494]: lxc_health: Gained 
carrier Jan 28 06:20:35.527147 systemd-networkd[1494]: lxc778ec0f0917a: Link UP Jan 28 06:20:35.548106 kernel: eth0: renamed from tmp036cb Jan 28 06:20:35.556845 systemd-networkd[1494]: lxc8c9e58599377: Link UP Jan 28 06:20:35.568806 systemd-networkd[1494]: lxc778ec0f0917a: Gained carrier Jan 28 06:20:35.572227 kernel: eth0: renamed from tmp6a8eb Jan 28 06:20:35.576591 systemd-networkd[1494]: lxc8c9e58599377: Gained carrier Jan 28 06:20:36.531479 systemd-networkd[1494]: lxc_health: Gained IPv6LL Jan 28 06:20:36.787326 systemd-networkd[1494]: lxc8c9e58599377: Gained IPv6LL Jan 28 06:20:37.555344 systemd-networkd[1494]: lxc778ec0f0917a: Gained IPv6LL Jan 28 06:20:41.218329 containerd[1577]: time="2026-01-28T06:20:41.216816470Z" level=info msg="connecting to shim 6a8eb45dc83ddc5ecc16406426b2de89256a2e3f1f0ec53383b2daef113bad3a" address="unix:///run/containerd/s/99ff606c8e332dd280a00c905237605f427a5820b4590ef8fc05c908c82db769" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:41.283448 containerd[1577]: time="2026-01-28T06:20:41.283355930Z" level=info msg="connecting to shim 036cbbb1f6df277c59b164b93e082864200583bf1a63ebdf7423f53c29524ff7" address="unix:///run/containerd/s/7eaa8ea05581bb5e8e383be807fa2a6a7d6e63e9dd00ca9120491e55114c70f1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:20:41.348360 systemd[1]: Started cri-containerd-6a8eb45dc83ddc5ecc16406426b2de89256a2e3f1f0ec53383b2daef113bad3a.scope - libcontainer container 6a8eb45dc83ddc5ecc16406426b2de89256a2e3f1f0ec53383b2daef113bad3a. Jan 28 06:20:41.395399 systemd[1]: Started cri-containerd-036cbbb1f6df277c59b164b93e082864200583bf1a63ebdf7423f53c29524ff7.scope - libcontainer container 036cbbb1f6df277c59b164b93e082864200583bf1a63ebdf7423f53c29524ff7. Jan 28 06:20:41.491800 containerd[1577]: time="2026-01-28T06:20:41.491461259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vstnc,Uid:4a587878-18ad-42b5-a252-d11dc17879cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a8eb45dc83ddc5ecc16406426b2de89256a2e3f1f0ec53383b2daef113bad3a\"" Jan 28 06:20:41.492748 containerd[1577]: time="2026-01-28T06:20:41.492689255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rq6w5,Uid:a945fe3f-0205-4441-b12b-9b50ca3124a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"036cbbb1f6df277c59b164b93e082864200583bf1a63ebdf7423f53c29524ff7\"" Jan 28 06:20:41.499404 containerd[1577]: time="2026-01-28T06:20:41.499342079Z" level=info msg="CreateContainer within sandbox \"036cbbb1f6df277c59b164b93e082864200583bf1a63ebdf7423f53c29524ff7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 06:20:41.500123 containerd[1577]: time="2026-01-28T06:20:41.500090751Z" level=info msg="CreateContainer within sandbox \"6a8eb45dc83ddc5ecc16406426b2de89256a2e3f1f0ec53383b2daef113bad3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 06:20:41.522743 containerd[1577]: time="2026-01-28T06:20:41.522696343Z" level=info msg="Container 2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:41.526619 containerd[1577]: time="2026-01-28T06:20:41.526588207Z" level=info msg="Container d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:20:41.535078 containerd[1577]: time="2026-01-28T06:20:41.534975983Z" level=info msg="CreateContainer within sandbox \"6a8eb45dc83ddc5ecc16406426b2de89256a2e3f1f0ec53383b2daef113bad3a\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e\"" Jan 28 06:20:41.537585 containerd[1577]: time="2026-01-28T06:20:41.537542820Z" level=info msg="StartContainer for \"2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e\"" Jan 28 06:20:41.541694 containerd[1577]: time="2026-01-28T06:20:41.541603668Z" level=info msg="connecting to shim 2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e" address="unix:///run/containerd/s/99ff606c8e332dd280a00c905237605f427a5820b4590ef8fc05c908c82db769" protocol=ttrpc version=3 Jan 28 06:20:41.547523 containerd[1577]: time="2026-01-28T06:20:41.547486327Z" level=info msg="CreateContainer within sandbox \"036cbbb1f6df277c59b164b93e082864200583bf1a63ebdf7423f53c29524ff7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19\"" Jan 28 06:20:41.549689 containerd[1577]: time="2026-01-28T06:20:41.548311966Z" level=info msg="StartContainer for \"d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19\"" Jan 28 06:20:41.549689 containerd[1577]: time="2026-01-28T06:20:41.549485321Z" level=info msg="connecting to shim d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19" address="unix:///run/containerd/s/7eaa8ea05581bb5e8e383be807fa2a6a7d6e63e9dd00ca9120491e55114c70f1" protocol=ttrpc version=3 Jan 28 06:20:41.582392 systemd[1]: Started cri-containerd-d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19.scope - libcontainer container d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19. Jan 28 06:20:41.592310 systemd[1]: Started cri-containerd-2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e.scope - libcontainer container 2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e. Jan 28 06:20:41.679682 containerd[1577]: time="2026-01-28T06:20:41.679270064Z" level=info msg="StartContainer for \"d76b7e469f91c599db62ab889a37e25d870fa1d9bf993b40180543a48e615f19\" returns successfully" Jan 28 06:20:41.679682 containerd[1577]: time="2026-01-28T06:20:41.679601892Z" level=info msg="StartContainer for \"2ae07cd6d8ed7706c5a1624b97f8ca64eacc4f256847e80c92867752f6b8237e\" returns successfully" Jan 28 06:20:42.193876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135003262.mount: Deactivated successfully. 
Jan 28 06:20:42.210098 kubelet[2893]: I0128 06:20:42.204147 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vstnc" podStartSLOduration=27.204048842 podStartE2EDuration="27.204048842s" podCreationTimestamp="2026-01-28 06:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:20:42.197528068 +0000 UTC m=+31.675098869" watchObservedRunningTime="2026-01-28 06:20:42.204048842 +0000 UTC m=+31.681619622" Jan 28 06:20:42.261803 kubelet[2893]: I0128 06:20:42.261443 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rq6w5" podStartSLOduration=27.261423586 podStartE2EDuration="27.261423586s" podCreationTimestamp="2026-01-28 06:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:20:42.261153167 +0000 UTC m=+31.738723970" watchObservedRunningTime="2026-01-28 06:20:42.261423586 +0000 UTC m=+31.738994373" Jan 28 06:21:27.623969 systemd[1]: Started sshd@9-10.230.78.222:22-68.220.241.50:57832.service - OpenSSH per-connection server daemon (68.220.241.50:57832). Jan 28 06:21:28.252114 sshd[4222]: Accepted publickey for core from 68.220.241.50 port 57832 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:28.255699 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:28.284857 systemd-logind[1552]: New session 12 of user core. Jan 28 06:21:28.292316 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 06:21:29.219091 sshd[4225]: Connection closed by 68.220.241.50 port 57832 Jan 28 06:21:29.220997 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:29.238999 systemd[1]: sshd@9-10.230.78.222:22-68.220.241.50:57832.service: Deactivated successfully. Jan 28 06:21:29.242814 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 06:21:29.248137 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Jan 28 06:21:29.252272 systemd-logind[1552]: Removed session 12. Jan 28 06:21:34.331421 systemd[1]: Started sshd@10-10.230.78.222:22-68.220.241.50:35440.service - OpenSSH per-connection server daemon (68.220.241.50:35440). Jan 28 06:21:34.918130 sshd[4238]: Accepted publickey for core from 68.220.241.50 port 35440 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:34.920103 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:34.927831 systemd-logind[1552]: New session 13 of user core. Jan 28 06:21:34.941304 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 06:21:35.435129 sshd[4241]: Connection closed by 68.220.241.50 port 35440 Jan 28 06:21:35.436020 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:35.441909 systemd[1]: sshd@10-10.230.78.222:22-68.220.241.50:35440.service: Deactivated successfully. Jan 28 06:21:35.448812 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 06:21:35.453581 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Jan 28 06:21:35.457334 systemd-logind[1552]: Removed session 13. Jan 28 06:21:40.550048 systemd[1]: Started sshd@11-10.230.78.222:22-68.220.241.50:35454.service - OpenSSH per-connection server daemon (68.220.241.50:35454). 
Jan 28 06:21:41.160030 sshd[4256]: Accepted publickey for core from 68.220.241.50 port 35454 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:41.162356 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:41.170211 systemd-logind[1552]: New session 14 of user core. Jan 28 06:21:41.178354 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 06:21:41.698513 sshd[4259]: Connection closed by 68.220.241.50 port 35454 Jan 28 06:21:41.699753 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:41.706392 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Jan 28 06:21:41.708191 systemd[1]: sshd@11-10.230.78.222:22-68.220.241.50:35454.service: Deactivated successfully. Jan 28 06:21:41.711329 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 06:21:41.713487 systemd-logind[1552]: Removed session 14. Jan 28 06:21:46.617645 update_engine[1554]: I20260128 06:21:46.617498 1554 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 06:21:46.617645 update_engine[1554]: I20260128 06:21:46.617624 1554 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 06:21:46.621391 update_engine[1554]: I20260128 06:21:46.621344 1554 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 06:21:46.622395 update_engine[1554]: I20260128 06:21:46.622346 1554 omaha_request_params.cc:62] Current group set to stable Jan 28 06:21:46.622891 update_engine[1554]: I20260128 06:21:46.622620 1554 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 06:21:46.622891 update_engine[1554]: I20260128 06:21:46.622646 1554 update_attempter.cc:643] Scheduling an action processor start. Jan 28 06:21:46.622891 update_engine[1554]: I20260128 06:21:46.622694 1554 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 06:21:46.622891 update_engine[1554]: I20260128 06:21:46.622787 1554 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 06:21:46.623236 update_engine[1554]: I20260128 06:21:46.622905 1554 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 06:21:46.623236 update_engine[1554]: I20260128 06:21:46.622925 1554 omaha_request_action.cc:272] Request: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: Jan 28 06:21:46.623236 update_engine[1554]: I20260128 06:21:46.622946 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:21:46.628646 update_engine[1554]: I20260128 06:21:46.627488 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:21:46.628762 update_engine[1554]: I20260128 06:21:46.628691 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 06:21:46.649792 locksmithd[1588]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 06:21:46.650523 update_engine[1554]: E20260128 06:21:46.650467 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 06:21:46.650629 update_engine[1554]: I20260128 06:21:46.650597 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 06:21:46.802854 systemd[1]: Started sshd@12-10.230.78.222:22-68.220.241.50:47742.service - OpenSSH per-connection server daemon (68.220.241.50:47742). Jan 28 06:21:47.403427 sshd[4272]: Accepted publickey for core from 68.220.241.50 port 47742 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:47.405836 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:47.415170 systemd-logind[1552]: New session 15 of user core. Jan 28 06:21:47.421299 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 06:21:47.911128 sshd[4277]: Connection closed by 68.220.241.50 port 47742 Jan 28 06:21:47.912660 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:47.920637 systemd[1]: sshd@12-10.230.78.222:22-68.220.241.50:47742.service: Deactivated successfully. Jan 28 06:21:47.923570 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 06:21:47.925121 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Jan 28 06:21:47.928214 systemd-logind[1552]: Removed session 15. Jan 28 06:21:48.021443 systemd[1]: Started sshd@13-10.230.78.222:22-68.220.241.50:47752.service - OpenSSH per-connection server daemon (68.220.241.50:47752). Jan 28 06:21:48.614917 sshd[4290]: Accepted publickey for core from 68.220.241.50 port 47752 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:48.617356 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:48.626370 systemd-logind[1552]: New session 16 of user core. Jan 28 06:21:48.635337 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 06:21:49.200526 sshd[4293]: Connection closed by 68.220.241.50 port 47752 Jan 28 06:21:49.201843 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:49.211655 systemd[1]: sshd@13-10.230.78.222:22-68.220.241.50:47752.service: Deactivated successfully. Jan 28 06:21:49.215884 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 06:21:49.219280 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Jan 28 06:21:49.222136 systemd-logind[1552]: Removed session 16. Jan 28 06:21:49.305859 systemd[1]: Started sshd@14-10.230.78.222:22-68.220.241.50:47764.service - OpenSSH per-connection server daemon (68.220.241.50:47764). Jan 28 06:21:49.905153 sshd[4303]: Accepted publickey for core from 68.220.241.50 port 47764 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:49.907213 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:49.915148 systemd-logind[1552]: New session 17 of user core. Jan 28 06:21:49.923364 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 06:21:50.415465 sshd[4306]: Connection closed by 68.220.241.50 port 47764 Jan 28 06:21:50.417373 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:50.424834 systemd[1]: sshd@14-10.230.78.222:22-68.220.241.50:47764.service: Deactivated successfully. Jan 28 06:21:50.428188 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 06:21:50.431661 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Jan 28 06:21:50.433396 systemd-logind[1552]: Removed session 17. Jan 28 06:21:55.522311 systemd[1]: Started sshd@15-10.230.78.222:22-68.220.241.50:51578.service - OpenSSH per-connection server daemon (68.220.241.50:51578). Jan 28 06:21:56.116883 sshd[4318]: Accepted publickey for core from 68.220.241.50 port 51578 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:21:56.119528 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:21:56.127161 systemd-logind[1552]: New session 18 of user core. Jan 28 06:21:56.138292 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 06:21:56.572248 update_engine[1554]: I20260128 06:21:56.572165 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:21:56.573054 update_engine[1554]: I20260128 06:21:56.572291 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:21:56.573054 update_engine[1554]: I20260128 06:21:56.572795 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 06:21:56.575086 update_engine[1554]: E20260128 06:21:56.574167 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 06:21:56.575086 update_engine[1554]: I20260128 06:21:56.574258 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 06:21:56.605705 sshd[4321]: Connection closed by 68.220.241.50 port 51578 Jan 28 06:21:56.605583 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Jan 28 06:21:56.612579 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Jan 28 06:21:56.612973 systemd[1]: sshd@15-10.230.78.222:22-68.220.241.50:51578.service: Deactivated successfully. Jan 28 06:21:56.616339 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 06:21:56.618668 systemd-logind[1552]: Removed session 18. Jan 28 06:22:01.706604 systemd[1]: Started sshd@16-10.230.78.222:22-68.220.241.50:51580.service - OpenSSH per-connection server daemon (68.220.241.50:51580). Jan 28 06:22:02.289606 sshd[4332]: Accepted publickey for core from 68.220.241.50 port 51580 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:02.291547 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:02.298570 systemd-logind[1552]: New session 19 of user core. Jan 28 06:22:02.307407 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 06:22:02.792110 sshd[4335]: Connection closed by 68.220.241.50 port 51580 Jan 28 06:22:02.794655 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:02.801979 systemd[1]: sshd@16-10.230.78.222:22-68.220.241.50:51580.service: Deactivated successfully. Jan 28 06:22:02.805394 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 06:22:02.808212 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. Jan 28 06:22:02.810576 systemd-logind[1552]: Removed session 19. 
Jan 28 06:22:02.899685 systemd[1]: Started sshd@17-10.230.78.222:22-68.220.241.50:54242.service - OpenSSH per-connection server daemon (68.220.241.50:54242). Jan 28 06:22:03.496041 sshd[4347]: Accepted publickey for core from 68.220.241.50 port 54242 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:03.498006 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:03.505548 systemd-logind[1552]: New session 20 of user core. Jan 28 06:22:03.515334 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 06:22:04.291819 sshd[4350]: Connection closed by 68.220.241.50 port 54242 Jan 28 06:22:04.292810 sshd-session[4347]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:04.301304 systemd[1]: sshd@17-10.230.78.222:22-68.220.241.50:54242.service: Deactivated successfully. Jan 28 06:22:04.304348 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 06:22:04.308055 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Jan 28 06:22:04.310420 systemd-logind[1552]: Removed session 20. Jan 28 06:22:04.392161 systemd[1]: Started sshd@18-10.230.78.222:22-68.220.241.50:54248.service - OpenSSH per-connection server daemon (68.220.241.50:54248). Jan 28 06:22:04.994182 sshd[4360]: Accepted publickey for core from 68.220.241.50 port 54248 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:04.995641 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:05.006080 systemd-logind[1552]: New session 21 of user core. Jan 28 06:22:05.012428 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 06:22:06.203230 sshd[4363]: Connection closed by 68.220.241.50 port 54248 Jan 28 06:22:06.203735 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:06.210179 systemd[1]: sshd@18-10.230.78.222:22-68.220.241.50:54248.service: Deactivated successfully. Jan 28 06:22:06.213511 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 06:22:06.216165 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit. Jan 28 06:22:06.218263 systemd-logind[1552]: Removed session 21. Jan 28 06:22:06.308986 systemd[1]: Started sshd@19-10.230.78.222:22-68.220.241.50:54250.service - OpenSSH per-connection server daemon (68.220.241.50:54250). Jan 28 06:22:06.573437 update_engine[1554]: I20260128 06:22:06.572149 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:22:06.573437 update_engine[1554]: I20260128 06:22:06.572429 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:22:06.573437 update_engine[1554]: I20260128 06:22:06.573262 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 06:22:06.574609 update_engine[1554]: E20260128 06:22:06.574457 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 06:22:06.574609 update_engine[1554]: I20260128 06:22:06.574559 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 06:22:06.890257 sshd[4380]: Accepted publickey for core from 68.220.241.50 port 54250 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:06.891249 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:06.899895 systemd-logind[1552]: New session 22 of user core. 
Jan 28 06:22:06.902326 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 06:22:07.589152 sshd[4383]: Connection closed by 68.220.241.50 port 54250 Jan 28 06:22:07.590328 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:07.597472 systemd[1]: sshd@19-10.230.78.222:22-68.220.241.50:54250.service: Deactivated successfully. Jan 28 06:22:07.600483 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 06:22:07.603304 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit. Jan 28 06:22:07.604877 systemd-logind[1552]: Removed session 22. Jan 28 06:22:07.699032 systemd[1]: Started sshd@20-10.230.78.222:22-68.220.241.50:54256.service - OpenSSH per-connection server daemon (68.220.241.50:54256). Jan 28 06:22:08.304660 sshd[4393]: Accepted publickey for core from 68.220.241.50 port 54256 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:08.307889 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:08.314895 systemd-logind[1552]: New session 23 of user core. Jan 28 06:22:08.325277 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 06:22:08.829456 sshd[4396]: Connection closed by 68.220.241.50 port 54256 Jan 28 06:22:08.831736 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:08.839135 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit. Jan 28 06:22:08.840282 systemd[1]: sshd@20-10.230.78.222:22-68.220.241.50:54256.service: Deactivated successfully. Jan 28 06:22:08.844024 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 06:22:08.846572 systemd-logind[1552]: Removed session 23. Jan 28 06:22:13.934024 systemd[1]: Started sshd@21-10.230.78.222:22-68.220.241.50:45600.service - OpenSSH per-connection server daemon (68.220.241.50:45600). Jan 28 06:22:14.534702 sshd[4410]: Accepted publickey for core from 68.220.241.50 port 45600 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:14.536734 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:14.545397 systemd-logind[1552]: New session 24 of user core. Jan 28 06:22:14.551420 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 06:22:15.043396 sshd[4413]: Connection closed by 68.220.241.50 port 45600 Jan 28 06:22:15.044351 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:15.050191 systemd[1]: sshd@21-10.230.78.222:22-68.220.241.50:45600.service: Deactivated successfully. Jan 28 06:22:15.054155 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 06:22:15.055886 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit. Jan 28 06:22:15.058040 systemd-logind[1552]: Removed session 24. Jan 28 06:22:16.575056 update_engine[1554]: I20260128 06:22:16.574152 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:22:16.575056 update_engine[1554]: I20260128 06:22:16.574297 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:22:16.575056 update_engine[1554]: I20260128 06:22:16.574836 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 06:22:16.576497 update_engine[1554]: E20260128 06:22:16.576336 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 06:22:16.576600 update_engine[1554]: I20260128 06:22:16.576558 1554 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 06:22:16.576600 update_engine[1554]: I20260128 06:22:16.576581 1554 omaha_request_action.cc:617] Omaha request response: Jan 28 06:22:16.576848 update_engine[1554]: E20260128 06:22:16.576814 1554 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 06:22:16.577114 update_engine[1554]: I20260128 06:22:16.577005 1554 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 06:22:16.577175 update_engine[1554]: I20260128 06:22:16.577109 1554 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 06:22:16.577175 update_engine[1554]: I20260128 06:22:16.577127 1554 update_attempter.cc:306] Processing Done. Jan 28 06:22:16.577253 update_engine[1554]: E20260128 06:22:16.577201 1554 update_attempter.cc:619] Update failed. Jan 28 06:22:16.577304 update_engine[1554]: I20260128 06:22:16.577229 1554 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 06:22:16.577304 update_engine[1554]: I20260128 06:22:16.577279 1554 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 06:22:16.577304 update_engine[1554]: I20260128 06:22:16.577292 1554 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.577431 1554 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.577486 1554 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.577531 1554 omaha_request_action.cc:272] Request: Jan 28 06:22:16.579372 update_engine[1554]: Jan 28 06:22:16.579372 update_engine[1554]: Jan 28 06:22:16.579372 update_engine[1554]: Jan 28 06:22:16.579372 update_engine[1554]: Jan 28 06:22:16.579372 update_engine[1554]: Jan 28 06:22:16.579372 update_engine[1554]: Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.577544 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.577606 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578162 1554 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 06:22:16.579372 update_engine[1554]: E20260128 06:22:16.578429 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578523 1554 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578542 1554 omaha_request_action.cc:617] Omaha request response: Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578553 1554 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578563 1554 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578572 1554 update_attempter.cc:306] Processing Done. Jan 28 06:22:16.579372 update_engine[1554]: I20260128 06:22:16.578583 1554 update_attempter.cc:310] Error event sent. Jan 28 06:22:16.580171 update_engine[1554]: I20260128 06:22:16.578596 1554 update_check_scheduler.cc:74] Next update check in 41m55s Jan 28 06:22:16.580306 locksmithd[1588]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 06:22:16.580306 locksmithd[1588]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 06:22:20.145753 systemd[1]: Started sshd@22-10.230.78.222:22-68.220.241.50:45606.service - OpenSSH per-connection server daemon (68.220.241.50:45606). Jan 28 06:22:20.723265 sshd[4430]: Accepted publickey for core from 68.220.241.50 port 45606 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:20.724995 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:20.733150 systemd-logind[1552]: New session 25 of user core. Jan 28 06:22:20.743318 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 06:22:21.210089 sshd[4433]: Connection closed by 68.220.241.50 port 45606 Jan 28 06:22:21.210944 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:21.216639 systemd[1]: sshd@22-10.230.78.222:22-68.220.241.50:45606.service: Deactivated successfully. Jan 28 06:22:21.219618 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 06:22:21.221035 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit. Jan 28 06:22:21.223624 systemd-logind[1552]: Removed session 25. Jan 28 06:22:26.320389 systemd[1]: Started sshd@23-10.230.78.222:22-68.220.241.50:42896.service - OpenSSH per-connection server daemon (68.220.241.50:42896). Jan 28 06:22:26.908983 sshd[4445]: Accepted publickey for core from 68.220.241.50 port 42896 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:26.910921 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:26.918909 systemd-logind[1552]: New session 26 of user core. Jan 28 06:22:26.926771 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 06:22:27.403244 sshd[4448]: Connection closed by 68.220.241.50 port 42896 Jan 28 06:22:27.404340 sshd-session[4445]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:27.411678 systemd[1]: sshd@23-10.230.78.222:22-68.220.241.50:42896.service: Deactivated successfully. 
Jan 28 06:22:27.415614 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 06:22:27.416979 systemd-logind[1552]: Session 26 logged out. Waiting for processes to exit. Jan 28 06:22:27.419620 systemd-logind[1552]: Removed session 26. Jan 28 06:22:27.505480 systemd[1]: Started sshd@24-10.230.78.222:22-68.220.241.50:42906.service - OpenSSH per-connection server daemon (68.220.241.50:42906). Jan 28 06:22:28.080309 sshd[4460]: Accepted publickey for core from 68.220.241.50 port 42906 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:28.082975 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:28.090047 systemd-logind[1552]: New session 27 of user core. Jan 28 06:22:28.111372 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 06:22:31.008807 containerd[1577]: time="2026-01-28T06:22:31.008578569Z" level=info msg="StopContainer for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" with timeout 30 (s)" Jan 28 06:22:31.014752 containerd[1577]: time="2026-01-28T06:22:31.014500793Z" level=info msg="Stop container \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" with signal terminated" Jan 28 06:22:31.050657 systemd[1]: cri-containerd-d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545.scope: Deactivated successfully. Jan 28 06:22:31.059198 containerd[1577]: time="2026-01-28T06:22:31.059114992Z" level=info msg="received container exit event container_id:\"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" id:\"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" pid:3463 exited_at:{seconds:1769581351 nanos:56507866}" Jan 28 06:22:31.080015 containerd[1577]: time="2026-01-28T06:22:31.079368711Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 06:22:31.095628 kubelet[2893]: E0128 06:22:31.095551 2893 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 06:22:31.099113 containerd[1577]: time="2026-01-28T06:22:31.099017677Z" level=info msg="StopContainer for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" with timeout 2 (s)" Jan 28 06:22:31.099844 containerd[1577]: time="2026-01-28T06:22:31.099778795Z" level=info msg="Stop container \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" with signal terminated" Jan 28 06:22:31.119249 systemd-networkd[1494]: lxc_health: Link DOWN Jan 28 06:22:31.121442 systemd-networkd[1494]: lxc_health: Lost carrier Jan 28 06:22:31.129040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545-rootfs.mount: Deactivated successfully. Jan 28 06:22:31.150508 systemd[1]: cri-containerd-aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a.scope: Deactivated successfully. Jan 28 06:22:31.150938 systemd[1]: cri-containerd-aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a.scope: Consumed 10.149s CPU time, 202.9M memory peak, 77.9M read from disk, 13.3M written to disk. 
Jan 28 06:22:31.152189 containerd[1577]: time="2026-01-28T06:22:31.152128050Z" level=info msg="received container exit event container_id:\"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" id:\"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" pid:3531 exited_at:{seconds:1769581351 nanos:151354398}" Jan 28 06:22:31.165195 containerd[1577]: time="2026-01-28T06:22:31.165130090Z" level=info msg="StopContainer for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" returns successfully" Jan 28 06:22:31.166652 containerd[1577]: time="2026-01-28T06:22:31.166615473Z" level=info msg="StopPodSandbox for \"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\"" Jan 28 06:22:31.167077 containerd[1577]: time="2026-01-28T06:22:31.166898973Z" level=info msg="Container to stop \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 06:22:31.181599 systemd[1]: cri-containerd-06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379.scope: Deactivated successfully. Jan 28 06:22:31.192628 containerd[1577]: time="2026-01-28T06:22:31.192476255Z" level=info msg="received sandbox exit event container_id:\"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" id:\"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" exit_status:137 exited_at:{seconds:1769581351 nanos:191820336}" monitor_name=podsandbox Jan 28 06:22:31.197159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a-rootfs.mount: Deactivated successfully. Jan 28 06:22:31.207473 containerd[1577]: time="2026-01-28T06:22:31.207429403Z" level=info msg="StopContainer for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" returns successfully" Jan 28 06:22:31.208530 containerd[1577]: time="2026-01-28T06:22:31.208202968Z" level=info msg="StopPodSandbox for \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\"" Jan 28 06:22:31.208605 containerd[1577]: time="2026-01-28T06:22:31.208537036Z" level=info msg="Container to stop \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 06:22:31.208605 containerd[1577]: time="2026-01-28T06:22:31.208560548Z" level=info msg="Container to stop \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 06:22:31.209740 containerd[1577]: time="2026-01-28T06:22:31.208579597Z" level=info msg="Container to stop \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 06:22:31.209810 containerd[1577]: time="2026-01-28T06:22:31.209738602Z" level=info msg="Container to stop \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 06:22:31.209810 containerd[1577]: time="2026-01-28T06:22:31.209756911Z" level=info msg="Container to stop \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 06:22:31.233318 systemd[1]: cri-containerd-d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df.scope: Deactivated successfully. 
Jan 28 06:22:31.237714 containerd[1577]: time="2026-01-28T06:22:31.237660848Z" level=info msg="received sandbox exit event container_id:\"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" id:\"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" exit_status:137 exited_at:{seconds:1769581351 nanos:233294572}" monitor_name=podsandbox Jan 28 06:22:31.259543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379-rootfs.mount: Deactivated successfully. Jan 28 06:22:31.264533 containerd[1577]: time="2026-01-28T06:22:31.264476341Z" level=info msg="shim disconnected" id=06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379 namespace=k8s.io Jan 28 06:22:31.264533 containerd[1577]: time="2026-01-28T06:22:31.264519652Z" level=warning msg="cleaning up after shim disconnected" id=06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379 namespace=k8s.io Jan 28 06:22:31.273838 containerd[1577]: time="2026-01-28T06:22:31.264544719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 06:22:31.289932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df-rootfs.mount: Deactivated successfully. Jan 28 06:22:31.292506 containerd[1577]: time="2026-01-28T06:22:31.292463473Z" level=info msg="shim disconnected" id=d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df namespace=k8s.io Jan 28 06:22:31.292706 containerd[1577]: time="2026-01-28T06:22:31.292679572Z" level=warning msg="cleaning up after shim disconnected" id=d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df namespace=k8s.io Jan 28 06:22:31.292936 containerd[1577]: time="2026-01-28T06:22:31.292807965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 06:22:31.310480 containerd[1577]: time="2026-01-28T06:22:31.309556147Z" level=info msg="received sandbox container exit event sandbox_id:\"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" exit_status:137 exited_at:{seconds:1769581351 nanos:191820336}" monitor_name=criService Jan 28 06:22:31.317491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379-shm.mount: Deactivated successfully. 
Jan 28 06:22:31.322588 containerd[1577]: time="2026-01-28T06:22:31.322540695Z" level=info msg="TearDown network for sandbox \"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" successfully" Jan 28 06:22:31.322693 containerd[1577]: time="2026-01-28T06:22:31.322589250Z" level=info msg="StopPodSandbox for \"06b5579fe9fe4831dffb63177f0f790a19ccf17d944aa2a5524c32ff69fd6379\" returns successfully" Jan 28 06:22:31.332387 containerd[1577]: time="2026-01-28T06:22:31.332319776Z" level=info msg="received sandbox container exit event sandbox_id:\"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" exit_status:137 exited_at:{seconds:1769581351 nanos:233294572}" monitor_name=criService Jan 28 06:22:31.336020 containerd[1577]: time="2026-01-28T06:22:31.335902356Z" level=info msg="TearDown network for sandbox \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" successfully" Jan 28 06:22:31.336020 containerd[1577]: time="2026-01-28T06:22:31.335933983Z" level=info msg="StopPodSandbox for \"d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df\" returns successfully" Jan 28 06:22:31.387170 kubelet[2893]: I0128 06:22:31.386794 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr949\" (UniqueName: \"kubernetes.io/projected/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-kube-api-access-vr949\") pod \"d9a5f7cf-ed9d-448d-b8f3-0aadae891adb\" (UID: \"d9a5f7cf-ed9d-448d-b8f3-0aadae891adb\") " Jan 28 06:22:31.387461 kubelet[2893]: I0128 06:22:31.387222 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-cilium-config-path\") pod \"d9a5f7cf-ed9d-448d-b8f3-0aadae891adb\" (UID: \"d9a5f7cf-ed9d-448d-b8f3-0aadae891adb\") " Jan 28 06:22:31.392520 kubelet[2893]: I0128 06:22:31.392183 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9a5f7cf-ed9d-448d-b8f3-0aadae891adb" (UID: "d9a5f7cf-ed9d-448d-b8f3-0aadae891adb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 06:22:31.395766 kubelet[2893]: I0128 06:22:31.395668 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-kube-api-access-vr949" (OuterVolumeSpecName: "kube-api-access-vr949") pod "d9a5f7cf-ed9d-448d-b8f3-0aadae891adb" (UID: "d9a5f7cf-ed9d-448d-b8f3-0aadae891adb"). InnerVolumeSpecName "kube-api-access-vr949". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 06:22:31.456714 kubelet[2893]: I0128 06:22:31.456628 2893 scope.go:117] "RemoveContainer" containerID="d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545" Jan 28 06:22:31.464994 containerd[1577]: time="2026-01-28T06:22:31.464789466Z" level=info msg="RemoveContainer for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\"" Jan 28 06:22:31.474913 systemd[1]: Removed slice kubepods-besteffort-podd9a5f7cf_ed9d_448d_b8f3_0aadae891adb.slice - libcontainer container kubepods-besteffort-podd9a5f7cf_ed9d_448d_b8f3_0aadae891adb.slice. 
Jan 28 06:22:31.483371 containerd[1577]: time="2026-01-28T06:22:31.483179596Z" level=info msg="RemoveContainer for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" returns successfully" Jan 28 06:22:31.483834 kubelet[2893]: I0128 06:22:31.483792 2893 scope.go:117] "RemoveContainer" containerID="d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545" Jan 28 06:22:31.484281 containerd[1577]: time="2026-01-28T06:22:31.484148369Z" level=error msg="ContainerStatus for \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\": not found" Jan 28 06:22:31.484985 kubelet[2893]: E0128 06:22:31.484773 2893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\": not found" containerID="d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545" Jan 28 06:22:31.484985 kubelet[2893]: I0128 06:22:31.484861 2893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545"} err="failed to get container status \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\": rpc error: code = NotFound desc = an error occurred when try to find container \"d550fc7325f5c12cad44552afcb0e44ade5581d188540ccd33c1d6cd962b0545\": not found" Jan 28 06:22:31.484985 kubelet[2893]: I0128 06:22:31.484946 2893 scope.go:117] "RemoveContainer" containerID="aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a" Jan 28 06:22:31.488017 kubelet[2893]: I0128 06:22:31.487955 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-cgroup\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.488667 kubelet[2893]: I0128 06:22:31.488642 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-net\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.489281 kubelet[2893]: I0128 06:22:31.489230 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-clustermesh-secrets\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.489649 kubelet[2893]: I0128 06:22:31.489594 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-kernel\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.489814 kubelet[2893]: I0128 06:22:31.489764 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-etc-cni-netd\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") 
" Jan 28 06:22:31.490105 kubelet[2893]: I0128 06:22:31.489798 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-bpf-maps\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.490576 kubelet[2893]: I0128 06:22:31.490553 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hubble-tls\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.490786 kubelet[2893]: I0128 06:22:31.490672 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxtw8\" (UniqueName: \"kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-kube-api-access-mxtw8\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.491539 kubelet[2893]: I0128 06:22:31.491097 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-lib-modules\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.491539 kubelet[2893]: I0128 06:22:31.491155 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-config-path\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.491539 kubelet[2893]: I0128 06:22:31.491185 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cni-path\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.491539 kubelet[2893]: I0128 06:22:31.491212 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-run\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.491539 kubelet[2893]: I0128 06:22:31.491236 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hostproc\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.491539 kubelet[2893]: I0128 06:22:31.491258 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-xtables-lock\") pod \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\" (UID: \"e3fce5c6-42b1-47a6-8aba-c0df5ac758aa\") " Jan 28 06:22:31.492050 kubelet[2893]: I0128 06:22:31.491344 2893 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-cilium-config-path\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.492050 kubelet[2893]: I0128 06:22:31.491368 2893 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-vr949\" (UniqueName: \"kubernetes.io/projected/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb-kube-api-access-vr949\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.492050 kubelet[2893]: I0128 06:22:31.491408 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.494439 kubelet[2893]: I0128 06:22:31.492537 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.494439 kubelet[2893]: I0128 06:22:31.492579 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.494439 kubelet[2893]: I0128 06:22:31.493175 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.494439 kubelet[2893]: I0128 06:22:31.493370 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.494439 kubelet[2893]: I0128 06:22:31.494127 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.494752 containerd[1577]: time="2026-01-28T06:22:31.494284700Z" level=info msg="RemoveContainer for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\"" Jan 28 06:22:31.495597 kubelet[2893]: I0128 06:22:31.495381 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.495597 kubelet[2893]: I0128 06:22:31.495459 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.495597 kubelet[2893]: I0128 06:22:31.495472 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.495597 kubelet[2893]: I0128 06:22:31.495513 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 06:22:31.515207 kubelet[2893]: I0128 06:22:31.515033 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 06:22:31.516145 kubelet[2893]: I0128 06:22:31.515433 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 06:22:31.516145 kubelet[2893]: I0128 06:22:31.515972 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 06:22:31.521716 kubelet[2893]: I0128 06:22:31.521403 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-kube-api-access-mxtw8" (OuterVolumeSpecName: "kube-api-access-mxtw8") pod "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" (UID: "e3fce5c6-42b1-47a6-8aba-c0df5ac758aa"). InnerVolumeSpecName "kube-api-access-mxtw8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 06:22:31.521821 containerd[1577]: time="2026-01-28T06:22:31.521634233Z" level=info msg="RemoveContainer for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" returns successfully" Jan 28 06:22:31.524208 kubelet[2893]: I0128 06:22:31.524145 2893 scope.go:117] "RemoveContainer" containerID="974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f" Jan 28 06:22:31.529619 containerd[1577]: time="2026-01-28T06:22:31.529576905Z" level=info msg="RemoveContainer for \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\"" Jan 28 06:22:31.536143 containerd[1577]: time="2026-01-28T06:22:31.536113969Z" level=info msg="RemoveContainer for \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\" returns successfully" Jan 28 06:22:31.536536 kubelet[2893]: I0128 06:22:31.536502 2893 scope.go:117] "RemoveContainer" containerID="97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3" Jan 28 06:22:31.540242 containerd[1577]: time="2026-01-28T06:22:31.540163349Z" level=info msg="RemoveContainer for \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\"" Jan 28 06:22:31.545514 containerd[1577]: time="2026-01-28T06:22:31.545441171Z" level=info msg="RemoveContainer for \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\" returns successfully" Jan 28 06:22:31.545949 kubelet[2893]: I0128 06:22:31.545920 2893 scope.go:117] "RemoveContainer" containerID="e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0" Jan 28 06:22:31.548622 containerd[1577]: time="2026-01-28T06:22:31.548592525Z" level=info msg="RemoveContainer for \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\"" Jan 28 06:22:31.552680 containerd[1577]: time="2026-01-28T06:22:31.552648862Z" level=info msg="RemoveContainer for \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\" returns successfully" Jan 28 06:22:31.553038 kubelet[2893]: I0128 06:22:31.553007 2893 scope.go:117] "RemoveContainer" containerID="5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687" Jan 28 06:22:31.556031 containerd[1577]: time="2026-01-28T06:22:31.555937274Z" level=info msg="RemoveContainer for \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\"" Jan 28 06:22:31.560171 containerd[1577]: time="2026-01-28T06:22:31.560115807Z" level=info msg="RemoveContainer for \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\" returns successfully" Jan 28 06:22:31.560440 kubelet[2893]: I0128 06:22:31.560396 2893 scope.go:117] "RemoveContainer" containerID="aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a" Jan 28 06:22:31.560721 containerd[1577]: time="2026-01-28T06:22:31.560667183Z" level=error msg="ContainerStatus for \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\": not found" Jan 28 06:22:31.561043 kubelet[2893]: E0128 06:22:31.561005 2893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\": not found" containerID="aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a" Jan 28 06:22:31.561147 kubelet[2893]: I0128 06:22:31.561057 2893 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a"} err="failed to get container status \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\": rpc error: code = NotFound desc = an error occurred when try to find container \"aed69fec934ab148fc925834837c87dac48468a7dfa655e47c8ecf26573a942a\": not found" Jan 28 06:22:31.561147 kubelet[2893]: I0128 06:22:31.561122 2893 scope.go:117] "RemoveContainer" containerID="974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f" Jan 28 06:22:31.561414 containerd[1577]: time="2026-01-28T06:22:31.561372892Z" level=error msg="ContainerStatus for \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\": not found" Jan 28 06:22:31.561677 kubelet[2893]: E0128 06:22:31.561639 2893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\": not found" containerID="974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f" Jan 28 06:22:31.561803 kubelet[2893]: I0128 06:22:31.561676 2893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f"} err="failed to get container status \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\": rpc error: code = NotFound desc = an error occurred when try to find container \"974398b1e60f2eb1bfa2730dddafb4dbd56596b944458f8a0813357490e6e19f\": not found" Jan 28 06:22:31.561803 kubelet[2893]: I0128 06:22:31.561729 2893 scope.go:117] "RemoveContainer" containerID="97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3" Jan 28 06:22:31.562261 containerd[1577]: time="2026-01-28T06:22:31.562016948Z" level=error msg="ContainerStatus for \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\": not found" Jan 28 06:22:31.562395 kubelet[2893]: E0128 06:22:31.562289 2893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\": not found" containerID="97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3" Jan 28 06:22:31.562464 kubelet[2893]: I0128 06:22:31.562386 2893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3"} err="failed to get container status \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\": rpc error: code = NotFound desc = an error occurred when try to find container \"97aa7e4d24edbe2c4593ad76e22979476b1d630869883b91b894dadd78c95ae3\": not found" Jan 28 06:22:31.562464 kubelet[2893]: I0128 06:22:31.562412 2893 scope.go:117] "RemoveContainer" containerID="e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0" Jan 28 06:22:31.562714 containerd[1577]: time="2026-01-28T06:22:31.562605556Z" level=error msg="ContainerStatus for \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\": not found" Jan 28 06:22:31.562831 kubelet[2893]: E0128 06:22:31.562783 2893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\": not found" containerID="e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0" Jan 28 06:22:31.562924 kubelet[2893]: I0128 06:22:31.562810 2893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0"} err="failed to get container status \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3fe7dfaaae167cca3260a53161f2e280f4b5297d869608a6b61f93e273dd2d0\": not found" Jan 28 06:22:31.562924 kubelet[2893]: I0128 06:22:31.562848 2893 scope.go:117] "RemoveContainer" containerID="5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687" Jan 28 06:22:31.563395 containerd[1577]: time="2026-01-28T06:22:31.563270005Z" level=error msg="ContainerStatus for \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\": not found" Jan 28 06:22:31.563633 kubelet[2893]: E0128 06:22:31.563553 2893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\": not found" containerID="5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687" Jan 28 06:22:31.563710 kubelet[2893]: I0128 06:22:31.563640 2893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687"} err="failed to get container status \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d010bd17bb4c8f68d5f05e52f6d71df90b7ceac7577a6ff3d39fccc6e645687\": not found" Jan 28 06:22:31.592044 kubelet[2893]: I0128 06:22:31.591989 2893 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-lib-modules\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592044 kubelet[2893]: I0128 06:22:31.592039 2893 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-config-path\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592044 kubelet[2893]: I0128 06:22:31.592058 2893 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cni-path\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592101 2893 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-run\") on node \"srv-4e3e3.gb1.brightbox.com\" 
DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592116 2893 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hostproc\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592133 2893 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-xtables-lock\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592147 2893 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-cilium-cgroup\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592160 2893 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-net\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592174 2893 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-clustermesh-secrets\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592189 2893 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-host-proc-sys-kernel\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592374 kubelet[2893]: I0128 06:22:31.592203 2893 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-etc-cni-netd\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592753 kubelet[2893]: I0128 06:22:31.592217 2893 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-bpf-maps\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592753 kubelet[2893]: I0128 06:22:31.592230 2893 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-hubble-tls\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.592753 kubelet[2893]: I0128 06:22:31.592242 2893 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxtw8\" (UniqueName: \"kubernetes.io/projected/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa-kube-api-access-mxtw8\") on node \"srv-4e3e3.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:22:31.782308 systemd[1]: Removed slice kubepods-burstable-pode3fce5c6_42b1_47a6_8aba_c0df5ac758aa.slice - libcontainer container kubepods-burstable-pode3fce5c6_42b1_47a6_8aba_c0df5ac758aa.slice. Jan 28 06:22:31.782488 systemd[1]: kubepods-burstable-pode3fce5c6_42b1_47a6_8aba_c0df5ac758aa.slice: Consumed 10.315s CPU time, 203.3M memory peak, 79M read from disk, 13.3M written to disk. Jan 28 06:22:32.129003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6a78d7da59b7399dd11529753395da694b8284dd434685bb835e3b637b4c8df-shm.mount: Deactivated successfully. 
Jan 28 06:22:32.129181 systemd[1]: var-lib-kubelet-pods-d9a5f7cf\x2ded9d\x2d448d\x2db8f3\x2d0aadae891adb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvr949.mount: Deactivated successfully. Jan 28 06:22:32.129338 systemd[1]: var-lib-kubelet-pods-e3fce5c6\x2d42b1\x2d47a6\x2d8aba\x2dc0df5ac758aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxtw8.mount: Deactivated successfully. Jan 28 06:22:32.129466 systemd[1]: var-lib-kubelet-pods-e3fce5c6\x2d42b1\x2d47a6\x2d8aba\x2dc0df5ac758aa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 28 06:22:32.129576 systemd[1]: var-lib-kubelet-pods-e3fce5c6\x2d42b1\x2d47a6\x2d8aba\x2dc0df5ac758aa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 28 06:22:32.875984 kubelet[2893]: I0128 06:22:32.875925 2893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a5f7cf-ed9d-448d-b8f3-0aadae891adb" path="/var/lib/kubelet/pods/d9a5f7cf-ed9d-448d-b8f3-0aadae891adb/volumes" Jan 28 06:22:32.879086 kubelet[2893]: I0128 06:22:32.879033 2893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3fce5c6-42b1-47a6-8aba-c0df5ac758aa" path="/var/lib/kubelet/pods/e3fce5c6-42b1-47a6-8aba-c0df5ac758aa/volumes" Jan 28 06:22:33.013171 sshd[4463]: Connection closed by 68.220.241.50 port 42906 Jan 28 06:22:33.015667 sshd-session[4460]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:33.022747 systemd[1]: sshd@24-10.230.78.222:22-68.220.241.50:42906.service: Deactivated successfully. Jan 28 06:22:33.025653 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 06:22:33.026354 systemd[1]: session-27.scope: Consumed 1.869s CPU time, 27.2M memory peak. Jan 28 06:22:33.028816 systemd-logind[1552]: Session 27 logged out. Waiting for processes to exit. Jan 28 06:22:33.031024 systemd-logind[1552]: Removed session 27. Jan 28 06:22:33.119517 systemd[1]: Started sshd@25-10.230.78.222:22-68.220.241.50:41878.service - OpenSSH per-connection server daemon (68.220.241.50:41878). Jan 28 06:22:33.611112 kubelet[2893]: I0128 06:22:33.609936 2893 setters.go:618] "Node became not ready" node="srv-4e3e3.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:22:33Z","lastTransitionTime":"2026-01-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 28 06:22:33.714822 sshd[4608]: Accepted publickey for core from 68.220.241.50 port 41878 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:33.716882 sshd-session[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:33.723689 systemd-logind[1552]: New session 28 of user core. Jan 28 06:22:33.734341 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 06:22:35.166661 systemd[1]: Created slice kubepods-burstable-pod342810a0_498f_45df_a512_38261e9ddf5e.slice - libcontainer container kubepods-burstable-pod342810a0_498f_45df_a512_38261e9ddf5e.slice. Jan 28 06:22:35.201241 sshd[4611]: Connection closed by 68.220.241.50 port 41878 Jan 28 06:22:35.201926 sshd-session[4608]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:35.209117 systemd-logind[1552]: Session 28 logged out. Waiting for processes to exit. 
Jan 28 06:22:35.210574 systemd[1]: sshd@25-10.230.78.222:22-68.220.241.50:41878.service: Deactivated successfully. Jan 28 06:22:35.215664 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 06:22:35.217131 kubelet[2893]: I0128 06:22:35.216632 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/342810a0-498f-45df-a512-38261e9ddf5e-cilium-config-path\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.220006 kubelet[2893]: I0128 06:22:35.217807 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-xtables-lock\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.220006 kubelet[2893]: I0128 06:22:35.218109 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-bpf-maps\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.220006 kubelet[2893]: I0128 06:22:35.218142 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-cilium-cgroup\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.220006 kubelet[2893]: I0128 06:22:35.218177 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/342810a0-498f-45df-a512-38261e9ddf5e-cilium-ipsec-secrets\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.220006 kubelet[2893]: I0128 06:22:35.218218 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-host-proc-sys-kernel\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.220006 kubelet[2893]: I0128 06:22:35.218277 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-lib-modules\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.221807 kubelet[2893]: I0128 06:22:35.218309 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-host-proc-sys-net\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.221807 kubelet[2893]: I0128 06:22:35.218337 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/342810a0-498f-45df-a512-38261e9ddf5e-clustermesh-secrets\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") 
" pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.221807 kubelet[2893]: I0128 06:22:35.218367 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slxc\" (UniqueName: \"kubernetes.io/projected/342810a0-498f-45df-a512-38261e9ddf5e-kube-api-access-9slxc\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.221807 kubelet[2893]: I0128 06:22:35.218397 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-cilium-run\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.221807 kubelet[2893]: I0128 06:22:35.218420 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-cni-path\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.221807 kubelet[2893]: I0128 06:22:35.218451 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-etc-cni-netd\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.223214 kubelet[2893]: I0128 06:22:35.218495 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/342810a0-498f-45df-a512-38261e9ddf5e-hostproc\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.223214 kubelet[2893]: I0128 06:22:35.218534 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/342810a0-498f-45df-a512-38261e9ddf5e-hubble-tls\") pod \"cilium-hvkz4\" (UID: \"342810a0-498f-45df-a512-38261e9ddf5e\") " pod="kube-system/cilium-hvkz4" Jan 28 06:22:35.222148 systemd-logind[1552]: Removed session 28. Jan 28 06:22:35.303248 systemd[1]: Started sshd@26-10.230.78.222:22-68.220.241.50:41894.service - OpenSSH per-connection server daemon (68.220.241.50:41894). Jan 28 06:22:35.479679 containerd[1577]: time="2026-01-28T06:22:35.478216466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvkz4,Uid:342810a0-498f-45df-a512-38261e9ddf5e,Namespace:kube-system,Attempt:0,}" Jan 28 06:22:35.501666 containerd[1577]: time="2026-01-28T06:22:35.501598777Z" level=info msg="connecting to shim fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1" address="unix:///run/containerd/s/b770da3f95fe3220a0199b0d86347a136faea73378fe47fbb37861918263b3b4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:22:35.541425 systemd[1]: Started cri-containerd-fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1.scope - libcontainer container fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1. 
Jan 28 06:22:35.584966 containerd[1577]: time="2026-01-28T06:22:35.584907413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvkz4,Uid:342810a0-498f-45df-a512-38261e9ddf5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\"" Jan 28 06:22:35.598396 containerd[1577]: time="2026-01-28T06:22:35.598306094Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 06:22:35.608894 containerd[1577]: time="2026-01-28T06:22:35.608658427Z" level=info msg="Container 54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:22:35.614821 containerd[1577]: time="2026-01-28T06:22:35.614750461Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f\"" Jan 28 06:22:35.615977 containerd[1577]: time="2026-01-28T06:22:35.615944139Z" level=info msg="StartContainer for \"54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f\"" Jan 28 06:22:35.617330 containerd[1577]: time="2026-01-28T06:22:35.617285419Z" level=info msg="connecting to shim 54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f" address="unix:///run/containerd/s/b770da3f95fe3220a0199b0d86347a136faea73378fe47fbb37861918263b3b4" protocol=ttrpc version=3 Jan 28 06:22:35.648415 systemd[1]: Started cri-containerd-54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f.scope - libcontainer container 54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f. Jan 28 06:22:35.706276 containerd[1577]: time="2026-01-28T06:22:35.706212901Z" level=info msg="StartContainer for \"54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f\" returns successfully" Jan 28 06:22:35.718491 systemd[1]: cri-containerd-54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f.scope: Deactivated successfully. Jan 28 06:22:35.719196 systemd[1]: cri-containerd-54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f.scope: Consumed 34ms CPU time, 9.4M memory peak, 3.1M read from disk. Jan 28 06:22:35.724097 containerd[1577]: time="2026-01-28T06:22:35.723963208Z" level=info msg="received container exit event container_id:\"54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f\" id:\"54fd86b257c9c06c5e18260d129ba3f115dcdd6f8a4102ab09fda32315c8745f\" pid:4687 exited_at:{seconds:1769581355 nanos:723431976}" Jan 28 06:22:35.916730 sshd[4621]: Accepted publickey for core from 68.220.241.50 port 41894 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:35.918667 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:35.928239 systemd-logind[1552]: New session 29 of user core. Jan 28 06:22:35.934373 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 28 06:22:36.098363 kubelet[2893]: E0128 06:22:36.098287 2893 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 06:22:36.312957 sshd[4719]: Connection closed by 68.220.241.50 port 41894 Jan 28 06:22:36.313888 sshd-session[4621]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:36.319566 systemd[1]: sshd@26-10.230.78.222:22-68.220.241.50:41894.service: Deactivated successfully. Jan 28 06:22:36.323357 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 06:22:36.324950 systemd-logind[1552]: Session 29 logged out. Waiting for processes to exit. Jan 28 06:22:36.334047 systemd-logind[1552]: Removed session 29. Jan 28 06:22:36.422643 systemd[1]: Started sshd@27-10.230.78.222:22-68.220.241.50:41898.service - OpenSSH per-connection server daemon (68.220.241.50:41898). Jan 28 06:22:36.501560 containerd[1577]: time="2026-01-28T06:22:36.501499586Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 06:22:36.513122 containerd[1577]: time="2026-01-28T06:22:36.510043627Z" level=info msg="Container c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:22:36.521898 containerd[1577]: time="2026-01-28T06:22:36.521816069Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648\"" Jan 28 06:22:36.524436 containerd[1577]: time="2026-01-28T06:22:36.524400476Z" level=info msg="StartContainer for \"c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648\"" Jan 28 06:22:36.525670 containerd[1577]: time="2026-01-28T06:22:36.525558639Z" level=info msg="connecting to shim c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648" address="unix:///run/containerd/s/b770da3f95fe3220a0199b0d86347a136faea73378fe47fbb37861918263b3b4" protocol=ttrpc version=3 Jan 28 06:22:36.579824 systemd[1]: Started cri-containerd-c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648.scope - libcontainer container c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648. Jan 28 06:22:36.687110 containerd[1577]: time="2026-01-28T06:22:36.687036217Z" level=info msg="StartContainer for \"c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648\" returns successfully" Jan 28 06:22:36.700802 systemd[1]: cri-containerd-c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648.scope: Deactivated successfully. Jan 28 06:22:36.701510 systemd[1]: cri-containerd-c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648.scope: Consumed 35ms CPU time, 7.5M memory peak, 2.1M read from disk. Jan 28 06:22:36.705028 containerd[1577]: time="2026-01-28T06:22:36.704904014Z" level=info msg="received container exit event container_id:\"c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648\" id:\"c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648\" pid:4741 exited_at:{seconds:1769581356 nanos:704567423}" Jan 28 06:22:36.738411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c031390a7553bfbb2dd1d1b02f21d9a59e999df416d58f0c4249cd2f8bd4e648-rootfs.mount: Deactivated successfully. 
Jan 28 06:22:37.031305 sshd[4726]: Accepted publickey for core from 68.220.241.50 port 41898 ssh2: RSA SHA256:oJgDr+O9JDwG6d9uP3vzwTqVHAzRHE+J5Lddumvmt40 Jan 28 06:22:37.033586 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:22:37.041287 systemd-logind[1552]: New session 30 of user core. Jan 28 06:22:37.056417 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 06:22:37.505355 containerd[1577]: time="2026-01-28T06:22:37.505299988Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 06:22:37.526141 containerd[1577]: time="2026-01-28T06:22:37.525181816Z" level=info msg="Container 9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:22:37.532244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938001292.mount: Deactivated successfully. Jan 28 06:22:37.540506 containerd[1577]: time="2026-01-28T06:22:37.540394260Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28\"" Jan 28 06:22:37.542478 containerd[1577]: time="2026-01-28T06:22:37.542360502Z" level=info msg="StartContainer for \"9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28\"" Jan 28 06:22:37.545333 containerd[1577]: time="2026-01-28T06:22:37.545124747Z" level=info msg="connecting to shim 9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28" address="unix:///run/containerd/s/b770da3f95fe3220a0199b0d86347a136faea73378fe47fbb37861918263b3b4" protocol=ttrpc version=3 Jan 28 06:22:37.578361 systemd[1]: Started cri-containerd-9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28.scope - libcontainer container 9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28. Jan 28 06:22:37.693641 containerd[1577]: time="2026-01-28T06:22:37.693561970Z" level=info msg="StartContainer for \"9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28\" returns successfully" Jan 28 06:22:37.703448 systemd[1]: cri-containerd-9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28.scope: Deactivated successfully. Jan 28 06:22:37.706547 containerd[1577]: time="2026-01-28T06:22:37.706483340Z" level=info msg="received container exit event container_id:\"9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28\" id:\"9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28\" pid:4791 exited_at:{seconds:1769581357 nanos:705753657}" Jan 28 06:22:37.739276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a5f63f26dd43af3cf28150440602c7fd31b71129058fb9495248f9b0576cb28-rootfs.mount: Deactivated successfully. 
Jan 28 06:22:38.515735 containerd[1577]: time="2026-01-28T06:22:38.515647909Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 06:22:38.534609 containerd[1577]: time="2026-01-28T06:22:38.532957991Z" level=info msg="Container c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:22:38.539047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060331747.mount: Deactivated successfully. Jan 28 06:22:38.554782 containerd[1577]: time="2026-01-28T06:22:38.554705296Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd\"" Jan 28 06:22:38.556247 containerd[1577]: time="2026-01-28T06:22:38.555988555Z" level=info msg="StartContainer for \"c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd\"" Jan 28 06:22:38.557634 containerd[1577]: time="2026-01-28T06:22:38.557604847Z" level=info msg="connecting to shim c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd" address="unix:///run/containerd/s/b770da3f95fe3220a0199b0d86347a136faea73378fe47fbb37861918263b3b4" protocol=ttrpc version=3 Jan 28 06:22:38.587356 systemd[1]: Started cri-containerd-c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd.scope - libcontainer container c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd. Jan 28 06:22:38.626002 systemd[1]: cri-containerd-c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd.scope: Deactivated successfully. Jan 28 06:22:38.628538 containerd[1577]: time="2026-01-28T06:22:38.628461588Z" level=info msg="received container exit event container_id:\"c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd\" id:\"c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd\" pid:4830 exited_at:{seconds:1769581358 nanos:625608258}" Jan 28 06:22:38.631936 containerd[1577]: time="2026-01-28T06:22:38.631906139Z" level=info msg="StartContainer for \"c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd\" returns successfully" Jan 28 06:22:38.684853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c40b47042569de4822fc1fdcb56e11bdc4f506a26e128a90a5874e57ff89dfdd-rootfs.mount: Deactivated successfully. Jan 28 06:22:39.526486 containerd[1577]: time="2026-01-28T06:22:39.526351688Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 06:22:39.540135 containerd[1577]: time="2026-01-28T06:22:39.539199836Z" level=info msg="Container d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:22:39.547987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094970098.mount: Deactivated successfully. 
Jan 28 06:22:39.560674 containerd[1577]: time="2026-01-28T06:22:39.559899305Z" level=info msg="CreateContainer within sandbox \"fa6a3a8f2d0eed243298056c6ac0e5f8a64160e9002e1758a6c6cf047b7f64f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f\"" Jan 28 06:22:39.562858 containerd[1577]: time="2026-01-28T06:22:39.562802467Z" level=info msg="StartContainer for \"d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f\"" Jan 28 06:22:39.564573 containerd[1577]: time="2026-01-28T06:22:39.564470954Z" level=info msg="connecting to shim d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f" address="unix:///run/containerd/s/b770da3f95fe3220a0199b0d86347a136faea73378fe47fbb37861918263b3b4" protocol=ttrpc version=3 Jan 28 06:22:39.605284 systemd[1]: Started cri-containerd-d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f.scope - libcontainer container d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f. Jan 28 06:22:39.680136 containerd[1577]: time="2026-01-28T06:22:39.680051189Z" level=info msg="StartContainer for \"d573d51bb32e8eeded29e897461ded99a5d19cda7b03edeb28ee4ef0c388283f\" returns successfully" Jan 28 06:22:40.476373 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 28 06:22:40.556652 kubelet[2893]: I0128 06:22:40.556559 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hvkz4" podStartSLOduration=5.556537053 podStartE2EDuration="5.556537053s" podCreationTimestamp="2026-01-28 06:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:22:40.551771575 +0000 UTC m=+150.029342375" watchObservedRunningTime="2026-01-28 06:22:40.556537053 +0000 UTC m=+150.034107844" Jan 28 06:22:44.276718 systemd-networkd[1494]: lxc_health: Link UP Jan 28 06:22:44.279359 systemd-networkd[1494]: lxc_health: Gained carrier Jan 28 06:22:46.195342 systemd-networkd[1494]: lxc_health: Gained IPv6LL Jan 28 06:22:51.158530 sshd[4772]: Connection closed by 68.220.241.50 port 41898 Jan 28 06:22:51.160485 sshd-session[4726]: pam_unix(sshd:session): session closed for user core Jan 28 06:22:51.170527 systemd[1]: sshd@27-10.230.78.222:22-68.220.241.50:41898.service: Deactivated successfully. Jan 28 06:22:51.174840 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 06:22:51.178165 systemd-logind[1552]: Session 30 logged out. Waiting for processes to exit. Jan 28 06:22:51.182537 systemd-logind[1552]: Removed session 30.