Jan 23 20:36:17.829939 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 20:36:17.829976 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 20:36:17.829986 kernel: BIOS-provided physical RAM map:
Jan 23 20:36:17.829994 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 20:36:17.830003 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 20:36:17.830483 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 20:36:17.830495 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 23 20:36:17.830502 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 23 20:36:17.830510 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 20:36:17.830517 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 20:36:17.830524 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 20:36:17.830532 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 20:36:17.830539 kernel: NX (Execute Disable) protection: active
Jan 23 20:36:17.830551 kernel: APIC: Static calls initialized
Jan 23 20:36:17.830560 kernel: SMBIOS 2.8 present.
Jan 23 20:36:17.830569 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 23 20:36:17.830577 kernel: DMI: Memory slots populated: 1/1
Jan 23 20:36:17.830585 kernel: Hypervisor detected: KVM
Jan 23 20:36:17.830594 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 20:36:17.830604 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 20:36:17.830613 kernel: kvm-clock: using sched offset of 5210333786 cycles
Jan 23 20:36:17.830622 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 20:36:17.830631 kernel: tsc: Detected 2294.608 MHz processor
Jan 23 20:36:17.830640 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 20:36:17.830648 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 20:36:17.830657 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 20:36:17.830666 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 20:36:17.830674 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 20:36:17.830685 kernel: Using GB pages for direct mapping
Jan 23 20:36:17.830694 kernel: ACPI: Early table checksum verification disabled
Jan 23 20:36:17.830703 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 20:36:17.830712 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830729 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830738 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830747 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 23 20:36:17.830755 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830764 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830776 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830784 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 20:36:17.830793 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 23 20:36:17.830805 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 23 20:36:17.830814 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 23 20:36:17.830823 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 23 20:36:17.830834 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 23 20:36:17.830843 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 23 20:36:17.830852 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 23 20:36:17.830861 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 23 20:36:17.830870 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 23 20:36:17.830879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 23 20:36:17.830888 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 23 20:36:17.830897 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 23 20:36:17.830908 kernel: Zone ranges:
Jan 23 20:36:17.830917 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 20:36:17.830926 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 23 20:36:17.830935 kernel: Normal empty
Jan 23 20:36:17.830956 kernel: Device empty
Jan 23 20:36:17.830965 kernel: Movable zone start for each node
Jan 23 20:36:17.830974 kernel: Early memory node ranges
Jan 23 20:36:17.830983 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 20:36:17.830992 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 23 20:36:17.831001 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 23 20:36:17.831012 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 20:36:17.831021 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 20:36:17.831030 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 23 20:36:17.831039 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 20:36:17.831048 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 20:36:17.831057 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 20:36:17.831066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 20:36:17.831075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 20:36:17.831084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 20:36:17.831095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 20:36:17.831104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 20:36:17.831113 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 20:36:17.831122 kernel: TSC deadline timer available
Jan 23 20:36:17.831131 kernel: CPU topo: Max. logical packages: 16
Jan 23 20:36:17.831140 kernel: CPU topo: Max. logical dies: 16
Jan 23 20:36:17.831149 kernel: CPU topo: Max. dies per package: 1
Jan 23 20:36:17.831157 kernel: CPU topo: Max. threads per core: 1
Jan 23 20:36:17.831166 kernel: CPU topo: Num. cores per package: 1
Jan 23 20:36:17.831177 kernel: CPU topo: Num. threads per package: 1
Jan 23 20:36:17.831186 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 23 20:36:17.831195 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 20:36:17.831204 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 20:36:17.831213 kernel: Booting paravirtualized kernel on KVM
Jan 23 20:36:17.831222 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 20:36:17.831231 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 23 20:36:17.831240 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 23 20:36:17.831249 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 23 20:36:17.831260 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 23 20:36:17.831269 kernel: kvm-guest: PV spinlocks enabled
Jan 23 20:36:17.831278 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 20:36:17.831288 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 20:36:17.831297 kernel: random: crng init done
Jan 23 20:36:17.831306 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 20:36:17.831315 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 20:36:17.831324 kernel: Fallback order for Node 0: 0
Jan 23 20:36:17.831335 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 23 20:36:17.831344 kernel: Policy zone: DMA32
Jan 23 20:36:17.831353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 20:36:17.831362 kernel: software IO TLB: area num 16.
Jan 23 20:36:17.831371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 23 20:36:17.831380 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 20:36:17.831389 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 20:36:17.831398 kernel: Dynamic Preempt: voluntary
Jan 23 20:36:17.831407 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 20:36:17.831419 kernel: rcu: RCU event tracing is enabled.
Jan 23 20:36:17.831428 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 23 20:36:17.831437 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 20:36:17.831446 kernel: Rude variant of Tasks RCU enabled.
Jan 23 20:36:17.831455 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 20:36:17.831464 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 20:36:17.831473 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 23 20:36:17.831482 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 20:36:17.831491 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 20:36:17.831502 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 20:36:17.831511 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 23 20:36:17.831520 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 20:36:17.831529 kernel: Console: colour VGA+ 80x25
Jan 23 20:36:17.831546 kernel: printk: legacy console [tty0] enabled
Jan 23 20:36:17.831557 kernel: printk: legacy console [ttyS0] enabled
Jan 23 20:36:17.831567 kernel: ACPI: Core revision 20240827
Jan 23 20:36:17.831576 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 20:36:17.831586 kernel: x2apic enabled
Jan 23 20:36:17.831596 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 20:36:17.831605 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 23 20:36:17.831615 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 23 20:36:17.831627 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 20:36:17.831636 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 20:36:17.831645 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 20:36:17.831655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 20:36:17.831664 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jan 23 20:36:17.831673 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 23 20:36:17.831685 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 23 20:36:17.831694 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 23 20:36:17.831704 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 20:36:17.832005 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 20:36:17.832021 kernel: TAA: Mitigation: Clear CPU buffers
Jan 23 20:36:17.832030 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 20:36:17.832040 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 23 20:36:17.832049 kernel: active return thunk: its_return_thunk
Jan 23 20:36:17.832058 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 20:36:17.832068 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 20:36:17.832081 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 20:36:17.832090 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 20:36:17.832100 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 20:36:17.832109 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 20:36:17.832118 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 20:36:17.832128 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 20:36:17.832137 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 20:36:17.832147 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 20:36:17.832156 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 20:36:17.832165 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 20:36:17.832175 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 20:36:17.832184 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 20:36:17.832234 kernel: Freeing SMP alternatives memory: 32K
Jan 23 20:36:17.832244 kernel: pid_max: default: 32768 minimum: 301
Jan 23 20:36:17.832253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 20:36:17.832262 kernel: landlock: Up and running.
Jan 23 20:36:17.832271 kernel: SELinux: Initializing.
Jan 23 20:36:17.832281 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 20:36:17.832290 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 20:36:17.832300 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Jan 23 20:36:17.832309 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 23 20:36:17.832319 kernel: signal: max sigframe size: 3632
Jan 23 20:36:17.832329 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 20:36:17.832342 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 20:36:17.832351 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 23 20:36:17.832361 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 20:36:17.832371 kernel: smp: Bringing up secondary CPUs ...
Jan 23 20:36:17.832380 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 20:36:17.832390 kernel: .... node #0, CPUs: #1
Jan 23 20:36:17.832399 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 20:36:17.832408 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 23 20:36:17.832418 kernel: Memory: 1887500K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 203124K reserved, 0K cma-reserved)
Jan 23 20:36:17.834751 kernel: devtmpfs: initialized
Jan 23 20:36:17.834766 kernel: x86/mm: Memory block size: 128MB
Jan 23 20:36:17.834776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 20:36:17.834786 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 23 20:36:17.834796 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 20:36:17.834805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 20:36:17.834815 kernel: audit: initializing netlink subsys (disabled)
Jan 23 20:36:17.834825 kernel: audit: type=2000 audit(1769200575.106:1): state=initialized audit_enabled=0 res=1
Jan 23 20:36:17.834834 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 20:36:17.834848 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 20:36:17.834858 kernel: cpuidle: using governor menu
Jan 23 20:36:17.834867 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 20:36:17.834877 kernel: dca service started, version 1.12.1
Jan 23 20:36:17.834886 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 20:36:17.834896 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 20:36:17.834906 kernel: PCI: Using configuration type 1 for base access
Jan 23 20:36:17.834915 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 20:36:17.834925 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 20:36:17.834937 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 20:36:17.834956 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 20:36:17.834965 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 20:36:17.834975 kernel: ACPI: Added _OSI(Module Device)
Jan 23 20:36:17.834984 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 20:36:17.834994 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 20:36:17.835003 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 20:36:17.835013 kernel: ACPI: Interpreter enabled
Jan 23 20:36:17.835022 kernel: ACPI: PM: (supports S0 S5)
Jan 23 20:36:17.835034 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 20:36:17.835043 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 20:36:17.835053 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 20:36:17.835063 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 20:36:17.835072 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 20:36:17.835262 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 20:36:17.835359 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 20:36:17.835453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 20:36:17.835466 kernel: PCI host bridge to bus 0000:00
Jan 23 20:36:17.835584 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 20:36:17.835667 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 20:36:17.835761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 20:36:17.835842 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 20:36:17.835922 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 20:36:17.836013 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 23 20:36:17.836094 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 20:36:17.836209 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 20:36:17.836324 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 23 20:36:17.836417 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 23 20:36:17.836506 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 23 20:36:17.836596 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 23 20:36:17.836687 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 20:36:17.836813 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.836919 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 23 20:36:17.837016 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 20:36:17.837106 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 20:36:17.837194 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 20:36:17.837293 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.837389 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 23 20:36:17.837479 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 20:36:17.837569 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 20:36:17.837658 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 20:36:17.837765 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.837857 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 23 20:36:17.837959 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 20:36:17.838049 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 20:36:17.838139 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 20:36:17.838242 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.838332 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 23 20:36:17.838426 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 20:36:17.838508 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 20:36:17.838589 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 20:36:17.838688 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.839782 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 23 20:36:17.839877 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 20:36:17.839974 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 20:36:17.840057 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 20:36:17.840147 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.840229 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 23 20:36:17.840315 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 20:36:17.840397 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 20:36:17.840479 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 20:36:17.840566 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.840649 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 23 20:36:17.841781 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 20:36:17.841895 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 20:36:17.842007 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 20:36:17.842120 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 20:36:17.842224 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 23 20:36:17.842305 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 20:36:17.842387 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 20:36:17.842490 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 20:36:17.842596 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 20:36:17.842687 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 20:36:17.842835 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 23 20:36:17.842918 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 23 20:36:17.843005 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 23 20:36:17.843096 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 20:36:17.843209 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 20:36:17.843294 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 23 20:36:17.843375 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 23 20:36:17.843480 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 20:36:17.843570 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 20:36:17.843676 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 20:36:17.843791 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 23 20:36:17.844826 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 23 20:36:17.844962 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 20:36:17.845058 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 20:36:17.845170 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 23 20:36:17.845265 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 23 20:36:17.845359 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 20:36:17.845451 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 20:36:17.845545 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 20:36:17.845652 kernel: pci_bus 0000:02: extended config space not accessible
Jan 23 20:36:17.846820 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 23 20:36:17.846937 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 23 20:36:17.847048 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 20:36:17.847150 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 20:36:17.847245 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 23 20:36:17.847344 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 20:36:17.847447 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 20:36:17.847543 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 23 20:36:17.847634 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 20:36:17.847739 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 20:36:17.847831 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 20:36:17.847923 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 20:36:17.848024 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 20:36:17.848116 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 20:36:17.848129 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 20:36:17.848140 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 20:36:17.848149 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 20:36:17.848159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 20:36:17.848169 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 20:36:17.848178 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 20:36:17.848190 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 20:36:17.848200 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 20:36:17.848210 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 20:36:17.848219 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 20:36:17.848229 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 20:36:17.848238 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 20:36:17.848248 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 20:36:17.848257 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 20:36:17.848267 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 20:36:17.848279 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 20:36:17.848288 kernel: iommu: Default domain type: Translated
Jan 23 20:36:17.848298 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 20:36:17.848308 kernel: PCI: Using ACPI for IRQ routing
Jan 23 20:36:17.848317 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 20:36:17.848326 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 20:36:17.848336 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 23 20:36:17.848430 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 20:36:17.848531 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 20:36:17.848621 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 20:36:17.848634 kernel: vgaarb: loaded
Jan 23 20:36:17.848644 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 20:36:17.848654 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 20:36:17.848663 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 20:36:17.848682 kernel: pnp: PnP ACPI init
Jan 23 20:36:17.850832 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 20:36:17.850853 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 20:36:17.850868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 20:36:17.850878 kernel: NET: Registered PF_INET protocol family
Jan 23 20:36:17.850888 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 20:36:17.850898 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 20:36:17.850908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 20:36:17.850918 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 20:36:17.850927 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 20:36:17.850937 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 20:36:17.850955 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 20:36:17.850965 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 20:36:17.850975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 20:36:17.850984 kernel: NET: Registered PF_XDP protocol family
Jan 23 20:36:17.851086 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 23 20:36:17.851180 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 20:36:17.851272 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 20:36:17.851364 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 20:36:17.851457 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 20:36:17.851552 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 20:36:17.851642 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 20:36:17.851745 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 20:36:17.851835 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 20:36:17.851950 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 20:36:17.852045 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 20:36:17.852135 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 20:36:17.852228 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 20:36:17.852318 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 20:36:17.852408 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 20:36:17.852497 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 20:36:17.852590 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 20:36:17.852687 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 20:36:17.853756 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 20:36:17.853857 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 20:36:17.853958 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 20:36:17.854058 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 20:36:17.854151 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 20:36:17.854243 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 20:36:17.854334 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 20:36:17.854424 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 20:36:17.854515 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 20:36:17.854608 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 20:36:17.854698 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 20:36:17.854795 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 20:36:17.854893 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 20:36:17.854982 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 20:36:17.855065 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 20:36:17.855147 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 20:36:17.855232 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 20:36:17.855315 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 20:36:17.855396 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 20:36:17.855478 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 20:36:17.855560 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 20:36:17.855642 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 20:36:17.855814 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 20:36:17.855931 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 20:36:17.856032 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 20:36:17.856124 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 20:36:17.856216 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 20:36:17.856306 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 20:36:17.856397 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 20:36:17.856487 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 20:36:17.856582 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 20:36:17.856664 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 20:36:17.856754 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 20:36:17.856832 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 20:36:17.856927 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 20:36:17.857014 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 20:36:17.857094 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 20:36:17.857174 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 23 20:36:17.857308 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 23 20:36:17.857395 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 23 20:36:17.857482 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 20:36:17.857573 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 23 20:36:17.857665 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 23 20:36:17.857762 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 23 20:36:17.857850 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 20:36:17.857957 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 23 20:36:17.858073 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 23 20:36:17.858157 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 20:36:17.858255 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 23 20:36:17.858340 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 23 20:36:17.858423 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 20:36:17.858515 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 23 20:36:17.858600 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 23 20:36:17.858688 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 20:36:17.858793 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 23 20:36:17.858878 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 23
20:36:17.858971 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 23 20:36:17.859061 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 23 20:36:17.859144 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 23 20:36:17.859229 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 23 20:36:17.859324 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 23 20:36:17.859408 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 23 20:36:17.859492 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 23 20:36:17.859507 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 20:36:17.859517 kernel: PCI: CLS 0 bytes, default 64 Jan 23 20:36:17.859528 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 20:36:17.859538 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 23 20:36:17.859549 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 20:36:17.859562 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 23 20:36:17.859573 kernel: Initialise system trusted keyrings Jan 23 20:36:17.859583 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 23 20:36:17.859594 kernel: Key type asymmetric registered Jan 23 20:36:17.859604 kernel: Asymmetric key parser 'x509' registered Jan 23 20:36:17.859614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 20:36:17.859624 kernel: io scheduler mq-deadline registered Jan 23 20:36:17.859635 kernel: io scheduler kyber registered Jan 23 20:36:17.859645 kernel: io scheduler bfq registered Jan 23 20:36:17.860068 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 20:36:17.860173 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 20:36:17.860267 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.860360 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 20:36:17.860451 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 20:36:17.860542 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.860639 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 20:36:17.860742 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 20:36:17.860835 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.860926 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 20:36:17.861023 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 20:36:17.861114 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.861208 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 20:36:17.861319 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 20:36:17.861816 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.861935 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 20:36:17.863816 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 20:36:17.863964 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.864071 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 20:36:17.864168 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 20:36:17.864263 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.864355 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 20:36:17.864446 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 20:36:17.864538 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 20:36:17.864555 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 20:36:17.864567 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 20:36:17.864577 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 20:36:17.864588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 20:36:17.864598 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 20:36:17.864618 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 20:36:17.864628 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 20:36:17.864637 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 20:36:17.865710 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 20:36:17.865742 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 20:36:17.865855 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 20:36:17.865950 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T20:36:17 UTC (1769200577) Jan 23 20:36:17.866034 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 20:36:17.866048 kernel: intel_pstate: CPU model not supported Jan 23 20:36:17.866059 kernel: NET: Registered PF_INET6 protocol family Jan 23 20:36:17.866069 kernel: Segment Routing with IPv6 Jan 23 20:36:17.866080 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 20:36:17.866094 kernel: NET: Registered PF_PACKET protocol family Jan 23 20:36:17.866104 kernel: Key type dns_resolver registered Jan 23 20:36:17.866115 kernel: IPI shorthand broadcast: enabled Jan 23 20:36:17.866125 kernel: 
sched_clock: Marking stable (3134002138, 114668184)->(3400524004, -151853682) Jan 23 20:36:17.866135 kernel: registered taskstats version 1 Jan 23 20:36:17.866146 kernel: Loading compiled-in X.509 certificates Jan 23 20:36:17.866156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 20:36:17.866167 kernel: Demotion targets for Node 0: null Jan 23 20:36:17.866177 kernel: Key type .fscrypt registered Jan 23 20:36:17.866190 kernel: Key type fscrypt-provisioning registered Jan 23 20:36:17.866200 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 20:36:17.866210 kernel: ima: Allocated hash algorithm: sha1 Jan 23 20:36:17.866220 kernel: ima: No architecture policies found Jan 23 20:36:17.866231 kernel: clk: Disabling unused clocks Jan 23 20:36:17.866241 kernel: Warning: unable to open an initial console. Jan 23 20:36:17.866251 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 20:36:17.866262 kernel: Write protecting the kernel read-only data: 40960k Jan 23 20:36:17.866272 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 20:36:17.866285 kernel: Run /init as init process Jan 23 20:36:17.866295 kernel: with arguments: Jan 23 20:36:17.866306 kernel: /init Jan 23 20:36:17.866316 kernel: with environment: Jan 23 20:36:17.866326 kernel: HOME=/ Jan 23 20:36:17.866336 kernel: TERM=linux Jan 23 20:36:17.866348 systemd[1]: Successfully made /usr/ read-only. Jan 23 20:36:17.866361 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 20:36:17.866375 systemd[1]: Detected virtualization kvm. 
Jan 23 20:36:17.866386 systemd[1]: Detected architecture x86-64. Jan 23 20:36:17.866397 systemd[1]: Running in initrd. Jan 23 20:36:17.866408 systemd[1]: No hostname configured, using default hostname. Jan 23 20:36:17.866419 systemd[1]: Hostname set to <localhost>. Jan 23 20:36:17.866430 systemd[1]: Initializing machine ID from VM UUID. Jan 23 20:36:17.866440 systemd[1]: Queued start job for default target initrd.target. Jan 23 20:36:17.866451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 20:36:17.866465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 20:36:17.866477 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 20:36:17.866488 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 20:36:17.866499 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 20:36:17.866520 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 20:36:17.866531 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 20:36:17.866543 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 20:36:17.866553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 20:36:17.866563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 20:36:17.866573 systemd[1]: Reached target paths.target - Path Units. Jan 23 20:36:17.866583 systemd[1]: Reached target slices.target - Slice Units. Jan 23 20:36:17.866593 systemd[1]: Reached target swap.target - Swaps. Jan 23 20:36:17.866603 systemd[1]: Reached target timers.target - Timer Units. 
Jan 23 20:36:17.866612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 20:36:17.866622 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 20:36:17.866634 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 20:36:17.866644 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 20:36:17.866654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 20:36:17.866664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 20:36:17.866674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 20:36:17.866684 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 20:36:17.866694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 20:36:17.866703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 20:36:17.866715 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 20:36:17.868735 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 20:36:17.868748 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 20:36:17.868763 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 20:36:17.868774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 20:36:17.868784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:36:17.868823 systemd-journald[208]: Collecting audit messages is disabled. Jan 23 20:36:17.868852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 20:36:17.868863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 20:36:17.868875 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 23 20:36:17.868886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 20:36:17.868896 systemd-journald[208]: Journal started Jan 23 20:36:17.868918 systemd-journald[208]: Runtime Journal (/run/log/journal/e6a1a00433d141cbb44e98c9bc3ba8c9) is 4.7M, max 37.8M, 33.1M free. Jan 23 20:36:17.860095 systemd-modules-load[211]: Inserted module 'overlay' Jan 23 20:36:17.911085 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 20:36:17.911114 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 20:36:17.911129 kernel: Bridge firewalling registered Jan 23 20:36:17.887837 systemd-modules-load[211]: Inserted module 'br_netfilter' Jan 23 20:36:17.911089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 20:36:17.911918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:36:17.913606 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 20:36:17.919587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 20:36:17.921975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 20:36:17.935700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 20:36:17.938843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 20:36:17.945901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 20:36:17.960961 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 20:36:17.961573 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 20:36:17.963923 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 20:36:17.965320 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 20:36:17.967774 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 20:36:17.971853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 20:36:17.989031 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 20:36:18.015738 systemd-resolved[251]: Positive Trust Anchors: Jan 23 20:36:18.015753 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 20:36:18.015790 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 20:36:18.021779 systemd-resolved[251]: Defaulting to hostname 'linux'. Jan 23 20:36:18.023218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 20:36:18.024085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 20:36:18.081780 kernel: SCSI subsystem initialized Jan 23 20:36:18.095779 kernel: Loading iSCSI transport class v2.0-870. Jan 23 20:36:18.106757 kernel: iscsi: registered transport (tcp) Jan 23 20:36:18.130835 kernel: iscsi: registered transport (qla4xxx) Jan 23 20:36:18.130946 kernel: QLogic iSCSI HBA Driver Jan 23 20:36:18.155490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 20:36:18.172360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 20:36:18.174330 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 20:36:18.241174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 20:36:18.243007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 20:36:18.308811 kernel: raid6: avx512x4 gen() 17327 MB/s Jan 23 20:36:18.325779 kernel: raid6: avx512x2 gen() 17280 MB/s Jan 23 20:36:18.342783 kernel: raid6: avx512x1 gen() 17283 MB/s Jan 23 20:36:18.359808 kernel: raid6: avx2x4 gen() 17387 MB/s Jan 23 20:36:18.376787 kernel: raid6: avx2x2 gen() 17373 MB/s Jan 23 20:36:18.393798 kernel: raid6: avx2x1 gen() 13390 MB/s Jan 23 20:36:18.393906 kernel: raid6: using algorithm avx2x4 gen() 17387 MB/s Jan 23 20:36:18.411878 kernel: raid6: .... xor() 6689 MB/s, rmw enabled Jan 23 20:36:18.411990 kernel: raid6: using avx512x2 recovery algorithm Jan 23 20:36:18.434770 kernel: xor: automatically using best checksumming function avx Jan 23 20:36:18.610773 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 20:36:18.618590 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 20:36:18.621346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 20:36:18.650128 systemd-udevd[461]: Using default interface naming scheme 'v255'. 
Jan 23 20:36:18.657230 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 20:36:18.663763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 20:36:18.690206 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Jan 23 20:36:18.734081 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 20:36:18.737143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 20:36:18.809364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 20:36:18.814350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 20:36:18.883876 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 23 20:36:18.890447 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 23 20:36:18.907730 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 20:36:18.913045 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 20:36:18.913084 kernel: GPT:17805311 != 125829119 Jan 23 20:36:18.913103 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 20:36:18.913114 kernel: GPT:17805311 != 125829119 Jan 23 20:36:18.913125 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 20:36:18.913137 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 20:36:18.922732 kernel: AES CTR mode by8 optimization enabled Jan 23 20:36:18.937748 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 20:36:18.962048 kernel: ACPI: bus type USB registered Jan 23 20:36:18.962088 kernel: usbcore: registered new interface driver usbfs Jan 23 20:36:18.963033 kernel: usbcore: registered new interface driver hub Jan 23 20:36:18.963849 kernel: usbcore: registered new device driver usb Jan 23 20:36:18.972770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 23 20:36:18.986262 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 20:36:19.002737 kernel: libata version 3.00 loaded. Jan 23 20:36:19.008249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 20:36:19.008385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:36:19.012522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:36:19.015087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:36:19.017683 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 20:36:19.032465 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 20:36:19.032673 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 20:36:19.032689 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 23 20:36:19.035531 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 23 20:36:19.035722 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 20:36:19.035004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 23 20:36:19.047454 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 20:36:19.047617 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 20:36:19.047854 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 20:36:19.047979 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 23 20:36:19.048107 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 23 20:36:19.048220 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 23 20:36:19.048330 kernel: hub 1-0:1.0: USB hub found Jan 23 20:36:19.048461 kernel: scsi host0: ahci Jan 23 20:36:19.048564 kernel: scsi host1: ahci Jan 23 20:36:19.048659 kernel: scsi host2: ahci Jan 23 20:36:19.045787 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 20:36:19.052331 kernel: scsi host3: ahci Jan 23 20:36:19.052476 kernel: hub 1-0:1.0: 4 ports detected Jan 23 20:36:19.054730 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 20:36:19.056098 kernel: hub 2-0:1.0: USB hub found Jan 23 20:36:19.056233 kernel: hub 2-0:1.0: 4 ports detected Jan 23 20:36:19.058737 kernel: scsi host4: ahci Jan 23 20:36:19.062424 kernel: scsi host5: ahci Jan 23 20:36:19.062944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 23 20:36:19.073288 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Jan 23 20:36:19.073308 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Jan 23 20:36:19.073321 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Jan 23 20:36:19.073333 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Jan 23 20:36:19.073345 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Jan 23 20:36:19.073357 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Jan 23 20:36:19.074876 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 20:36:19.108378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:36:19.125769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 20:36:19.126029 disk-uuid[613]: Primary Header is updated. Jan 23 20:36:19.126029 disk-uuid[613]: Secondary Entries is updated. Jan 23 20:36:19.126029 disk-uuid[613]: Secondary Header is updated. Jan 23 20:36:19.295832 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 20:36:19.374255 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 20:36:19.374389 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 20:36:19.377151 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 20:36:19.379625 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 20:36:19.381853 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 20:36:19.384760 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 20:36:19.397123 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 20:36:19.398996 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 23 20:36:19.399634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 20:36:19.400625 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 20:36:19.402394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 20:36:19.424777 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 20:36:19.433749 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 20:36:19.439487 kernel: usbcore: registered new interface driver usbhid Jan 23 20:36:19.439520 kernel: usbhid: USB HID core driver Jan 23 20:36:19.443756 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 20:36:19.443826 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 23 20:36:20.146838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 20:36:20.149642 disk-uuid[614]: The operation has completed successfully. Jan 23 20:36:20.208251 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 20:36:20.208369 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 20:36:20.244562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 20:36:20.265683 sh[639]: Success Jan 23 20:36:20.290077 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 20:36:20.290140 kernel: device-mapper: uevent: version 1.0.3 Jan 23 20:36:20.291535 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 20:36:20.308814 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jan 23 20:36:20.361075 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 20:36:20.366378 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 23 20:36:20.371788 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 20:36:20.388853 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (651) Jan 23 20:36:20.388895 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 20:36:20.391014 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 20:36:20.398146 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 20:36:20.398204 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 20:36:20.399816 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 20:36:20.400791 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 20:36:20.401925 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 20:36:20.402740 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 20:36:20.405826 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 20:36:20.437027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (682) Jan 23 20:36:20.437064 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:36:20.439114 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 20:36:20.447111 kernel: BTRFS info (device vda6): turning on async discard Jan 23 20:36:20.447141 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 20:36:20.456755 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 20:36:20.459504 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 23 20:36:20.461879 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 20:36:20.559259 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 20:36:20.562632 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 20:36:20.623491 ignition[743]: Ignition 2.22.0
Jan 23 20:36:20.623503 ignition[743]: Stage: fetch-offline
Jan 23 20:36:20.624529 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 20:36:20.623542 ignition[743]: no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:20.623551 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:20.623664 ignition[743]: parsed url from cmdline: ""
Jan 23 20:36:20.623667 ignition[743]: no config URL provided
Jan 23 20:36:20.623682 ignition[743]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 20:36:20.623690 ignition[743]: no config at "/usr/lib/ignition/user.ign"
Jan 23 20:36:20.623697 ignition[743]: failed to fetch config: resource requires networking
Jan 23 20:36:20.623863 ignition[743]: Ignition finished successfully
Jan 23 20:36:20.630852 systemd-networkd[821]: lo: Link UP
Jan 23 20:36:20.630858 systemd-networkd[821]: lo: Gained carrier
Jan 23 20:36:20.632885 systemd-networkd[821]: Enumeration completed
Jan 23 20:36:20.633256 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 20:36:20.633260 systemd-networkd[821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 20:36:20.633356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 20:36:20.634685 systemd-networkd[821]: eth0: Link UP
Jan 23 20:36:20.635020 systemd[1]: Reached target network.target - Network.
Jan 23 20:36:20.635157 systemd-networkd[821]: eth0: Gained carrier
Jan 23 20:36:20.635168 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 20:36:20.640191 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 20:36:20.647057 systemd-networkd[821]: eth0: DHCPv4 address 10.244.93.250/30, gateway 10.244.93.249 acquired from 10.244.93.249
Jan 23 20:36:20.665638 ignition[829]: Ignition 2.22.0
Jan 23 20:36:20.666364 ignition[829]: Stage: fetch
Jan 23 20:36:20.666902 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:20.667291 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:20.667848 ignition[829]: parsed url from cmdline: ""
Jan 23 20:36:20.667890 ignition[829]: no config URL provided
Jan 23 20:36:20.668203 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 20:36:20.668212 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Jan 23 20:36:20.668337 ignition[829]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 23 20:36:20.668677 ignition[829]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 20:36:20.668741 ignition[829]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 20:36:20.689204 ignition[829]: GET result: OK
Jan 23 20:36:20.689410 ignition[829]: parsing config with SHA512: 8da00463aba955806d4190a78ec74cdd90c5af049efd1715451924bdd7b99650a467183c367cf3d0e5262f2bc795251f09dbf20a1791459adec71e46b94c5194
Jan 23 20:36:20.695472 unknown[829]: fetched base config from "system"
Jan 23 20:36:20.695495 unknown[829]: fetched base config from "system"
Jan 23 20:36:20.695979 ignition[829]: fetch: fetch complete
Jan 23 20:36:20.695502 unknown[829]: fetched user config from "openstack"
Jan 23 20:36:20.695986 ignition[829]: fetch: fetch passed
Jan 23 20:36:20.698589 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 20:36:20.696038 ignition[829]: Ignition finished successfully
Jan 23 20:36:20.701899 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 20:36:20.733953 ignition[836]: Ignition 2.22.0
Jan 23 20:36:20.734484 ignition[836]: Stage: kargs
Jan 23 20:36:20.734632 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:20.734640 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:20.735301 ignition[836]: kargs: kargs passed
Jan 23 20:36:20.736469 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 20:36:20.735336 ignition[836]: Ignition finished successfully
Jan 23 20:36:20.738428 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 20:36:20.767491 ignition[842]: Ignition 2.22.0
Jan 23 20:36:20.767500 ignition[842]: Stage: disks
Jan 23 20:36:20.767615 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:20.767621 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:20.769297 ignition[842]: disks: disks passed
Jan 23 20:36:20.769338 ignition[842]: Ignition finished successfully
Jan 23 20:36:20.771971 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 20:36:20.773299 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 20:36:20.774119 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 20:36:20.774423 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 20:36:20.774699 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 20:36:20.775047 systemd[1]: Reached target basic.target - Basic System.
Jan 23 20:36:20.776895 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 20:36:20.805549 systemd-fsck[851]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jan 23 20:36:20.808438 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 20:36:20.812861 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 20:36:20.946728 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 20:36:20.948182 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 20:36:20.950169 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 20:36:20.953321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 20:36:20.955991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 20:36:20.958537 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 20:36:20.964977 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 23 20:36:20.966049 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 20:36:20.967781 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 20:36:20.970034 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 20:36:20.973844 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 20:36:20.974882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859)
Jan 23 20:36:20.977142 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 20:36:20.977174 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 20:36:20.983745 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 20:36:20.983780 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 20:36:20.987854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 20:36:21.048686 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 20:36:21.052744 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:21.059772 initrd-setup-root[894]: cut: /sysroot/etc/group: No such file or directory
Jan 23 20:36:21.070302 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 20:36:21.080705 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 20:36:21.192473 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 20:36:21.195810 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 20:36:21.197885 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 20:36:21.220752 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 20:36:21.235491 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 20:36:21.266315 ignition[977]: INFO : Ignition 2.22.0
Jan 23 20:36:21.267138 ignition[977]: INFO : Stage: mount
Jan 23 20:36:21.267138 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:21.267138 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:21.268792 ignition[977]: INFO : mount: mount passed
Jan 23 20:36:21.270165 ignition[977]: INFO : Ignition finished successfully
Jan 23 20:36:21.271144 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 20:36:21.391313 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 20:36:22.080824 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:22.276992 systemd-networkd[821]: eth0: Gained IPv6LL
Jan 23 20:36:23.788020 systemd-networkd[821]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:177e:24:19ff:fef4:5dfa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:177e:24:19ff:fef4:5dfa/64 assigned by NDisc.
Jan 23 20:36:23.788041 systemd-networkd[821]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 23 20:36:24.093782 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:28.106775 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:28.118350 coreos-metadata[861]: Jan 23 20:36:28.118 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 20:36:28.136342 coreos-metadata[861]: Jan 23 20:36:28.136 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 23 20:36:28.208986 coreos-metadata[861]: Jan 23 20:36:28.208 INFO Fetch successful
Jan 23 20:36:28.210698 coreos-metadata[861]: Jan 23 20:36:28.210 INFO wrote hostname srv-zm8g6.gb1.brightbox.com to /sysroot/etc/hostname
Jan 23 20:36:28.215311 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 23 20:36:28.215620 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 23 20:36:28.220825 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 20:36:28.263761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 20:36:28.285753 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (992)
Jan 23 20:36:28.285852 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 20:36:28.285890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 20:36:28.292976 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 20:36:28.293036 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 20:36:28.296400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 20:36:28.338829 ignition[1010]: INFO : Ignition 2.22.0
Jan 23 20:36:28.338829 ignition[1010]: INFO : Stage: files
Jan 23 20:36:28.340183 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:28.340183 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:28.340183 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 20:36:28.341988 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 20:36:28.341988 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 20:36:28.343095 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 20:36:28.343625 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 20:36:28.343625 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 20:36:28.343453 unknown[1010]: wrote ssh authorized keys file for user: core
Jan 23 20:36:28.345694 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 20:36:28.346403 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 20:36:28.525557 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 20:36:28.762320 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 20:36:28.764395 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 20:36:28.764395 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 23 20:36:29.015055 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 20:36:29.325750 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 20:36:29.325750 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 20:36:29.328120 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 20:36:29.334216 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 20:36:29.334216 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 20:36:29.334216 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 20:36:29.334216 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 20:36:29.334216 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 20:36:29.334216 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 20:36:29.636158 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 20:36:30.845439 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 20:36:30.845439 ignition[1010]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 20:36:30.849788 ignition[1010]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 20:36:30.850971 ignition[1010]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 20:36:30.850971 ignition[1010]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 20:36:30.850971 ignition[1010]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 20:36:30.850971 ignition[1010]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 20:36:30.854884 ignition[1010]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 20:36:30.854884 ignition[1010]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 20:36:30.854884 ignition[1010]: INFO : files: files passed
Jan 23 20:36:30.854884 ignition[1010]: INFO : Ignition finished successfully
Jan 23 20:36:30.857312 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 20:36:30.861855 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 20:36:30.865078 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 20:36:30.877514 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 20:36:30.877628 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 20:36:30.886646 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 20:36:30.886646 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 20:36:30.889784 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 20:36:30.892098 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 20:36:30.893637 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 20:36:30.895533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 20:36:30.965239 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 20:36:30.966550 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 20:36:30.967827 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 20:36:30.970094 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 20:36:30.971606 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 20:36:30.972893 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 20:36:31.008628 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 20:36:31.011842 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 20:36:31.057788 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 20:36:31.059628 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 20:36:31.061351 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 20:36:31.062263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 20:36:31.062385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 20:36:31.063809 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 20:36:31.064348 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 20:36:31.065201 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 20:36:31.066060 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 20:36:31.067055 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 20:36:31.068027 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 20:36:31.068967 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 20:36:31.069893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 20:36:31.070883 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 20:36:31.071864 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 20:36:31.076058 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 20:36:31.076778 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 20:36:31.076919 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 20:36:31.077755 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 20:36:31.078258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 20:36:31.079071 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 20:36:31.079162 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 20:36:31.079829 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 20:36:31.079950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 20:36:31.080868 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 20:36:31.080982 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 20:36:31.081989 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 20:36:31.082086 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 20:36:31.084812 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 20:36:31.085580 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 20:36:31.085785 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 20:36:31.088386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 20:36:31.090057 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 20:36:31.090204 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 20:36:31.091424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 20:36:31.091545 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 20:36:31.097171 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 20:36:31.097758 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 20:36:31.105781 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 20:36:31.121623 ignition[1063]: INFO : Ignition 2.22.0
Jan 23 20:36:31.122321 ignition[1063]: INFO : Stage: umount
Jan 23 20:36:31.122927 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 20:36:31.123440 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 20:36:31.125578 ignition[1063]: INFO : umount: umount passed
Jan 23 20:36:31.126001 ignition[1063]: INFO : Ignition finished successfully
Jan 23 20:36:31.128016 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 20:36:31.128573 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 20:36:31.129958 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 20:36:31.130635 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 20:36:31.131525 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 20:36:31.131570 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 20:36:31.132852 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 20:36:31.132907 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 20:36:31.134400 systemd[1]: Stopped target network.target - Network.
Jan 23 20:36:31.135016 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 20:36:31.135060 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 20:36:31.136160 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 20:36:31.136870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 20:36:31.136916 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 20:36:31.139572 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 20:36:31.139908 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 20:36:31.140287 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 20:36:31.140322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 20:36:31.140695 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 20:36:31.140747 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 20:36:31.141131 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 20:36:31.141174 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 20:36:31.142402 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 20:36:31.142441 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 20:36:31.143247 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 20:36:31.144980 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 20:36:31.146833 systemd-networkd[821]: eth0: DHCPv6 lease lost
Jan 23 20:36:31.153710 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 20:36:31.153867 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 20:36:31.156750 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 20:36:31.156969 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 20:36:31.157086 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 20:36:31.158985 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 20:36:31.159206 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 20:36:31.159298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 20:36:31.160987 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 20:36:31.161458 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 20:36:31.161500 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 20:36:31.162272 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 20:36:31.162320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 20:36:31.163702 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 20:36:31.165174 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 20:36:31.165227 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 20:36:31.166119 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 20:36:31.166162 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 20:36:31.168817 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 20:36:31.168859 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 20:36:31.169314 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 20:36:31.169352 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 20:36:31.170468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 20:36:31.173740 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 20:36:31.173802 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 20:36:31.182469 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 20:36:31.189938 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 20:36:31.190652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 20:36:31.190699 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 20:36:31.191160 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 20:36:31.191188 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 20:36:31.191559 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 20:36:31.191597 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 20:36:31.192708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 20:36:31.192771 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 20:36:31.193561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 20:36:31.193606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 20:36:31.195891 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 20:36:31.200414 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 20:36:31.200466 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 20:36:31.201791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 20:36:31.201833 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 20:36:31.202806 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 20:36:31.202847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 20:36:31.207052 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 20:36:31.207105 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 20:36:31.207146 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 20:36:31.207477 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 20:36:31.208824 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 20:36:31.212480 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 20:36:31.212573 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 20:36:31.213944 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 20:36:31.215503 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 20:36:31.234013 systemd[1]: Switching root.
Jan 23 20:36:31.281141 systemd-journald[208]: Journal stopped
Jan 23 20:36:32.417276 systemd-journald[208]: Received SIGTERM from PID 1 (systemd).
Jan 23 20:36:32.417353 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 20:36:32.417370 kernel: SELinux: policy capability open_perms=1
Jan 23 20:36:32.417386 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 20:36:32.417398 kernel: SELinux: policy capability always_check_network=0
Jan 23 20:36:32.417411 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 20:36:32.417425 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 20:36:32.417438 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 20:36:32.417450 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 20:36:32.417463 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 20:36:32.417475 kernel: audit: type=1403 audit(1769200591.426:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 20:36:32.417503 systemd[1]: Successfully loaded SELinux policy in 52.738ms.
Jan 23 20:36:32.417530 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.647ms.
Jan 23 20:36:32.417555 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 20:36:32.417569 systemd[1]: Detected virtualization kvm.
Jan 23 20:36:32.417582 systemd[1]: Detected architecture x86-64.
Jan 23 20:36:32.417595 systemd[1]: Detected first boot.
Jan 23 20:36:32.417609 systemd[1]: Hostname set to .
Jan 23 20:36:32.417623 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 20:36:32.417636 zram_generator::config[1106]: No configuration found.
Jan 23 20:36:32.417653 kernel: Guest personality initialized and is inactive
Jan 23 20:36:32.417666 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 20:36:32.417680 kernel: Initialized host personality
Jan 23 20:36:32.417704 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 20:36:32.425770 systemd[1]: Populated /etc with preset unit settings.
Jan 23 20:36:32.425802 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 20:36:32.425821 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 20:36:32.425835 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 20:36:32.425848 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 20:36:32.425862 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 20:36:32.425876 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 20:36:32.425889 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 20:36:32.425903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 20:36:32.425916 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 20:36:32.425933 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 20:36:32.425946 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 20:36:32.425961 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 20:36:32.425974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 20:36:32.425989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 20:36:32.426003 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 20:36:32.426020 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 20:36:32.426042 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 20:36:32.426057 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 20:36:32.426070 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 20:36:32.426085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 20:36:32.426098 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 20:36:32.426111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 20:36:32.426125 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 20:36:32.426149 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 20:36:32.426168 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 20:36:32.426181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 20:36:32.426195 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 20:36:32.426209 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 20:36:32.426222 systemd[1]: Reached target swap.target - Swaps.
Jan 23 20:36:32.426235 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 20:36:32.426249 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 20:36:32.426263 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 20:36:32.426276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 20:36:32.426292 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 20:36:32.426306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 20:36:32.426319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 20:36:32.426332 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 20:36:32.426346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 20:36:32.426359 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 20:36:32.426373 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:32.426386 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 20:36:32.426399 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 20:36:32.426415 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 20:36:32.426429 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 20:36:32.426443 systemd[1]: Reached target machines.target - Containers.
Jan 23 20:36:32.426457 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 20:36:32.426471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:36:32.426485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 20:36:32.426499 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 20:36:32.426513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 20:36:32.426534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 20:36:32.426556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 20:36:32.426569 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 20:36:32.426582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 20:36:32.426597 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 20:36:32.426613 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 20:36:32.426626 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 20:36:32.426642 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 20:36:32.426656 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 20:36:32.426669 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:36:32.426683 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 20:36:32.426696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 20:36:32.426710 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 20:36:32.426750 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 20:36:32.426766 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 20:36:32.426783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 20:36:32.426797 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 20:36:32.426811 systemd[1]: Stopped verity-setup.service.
Jan 23 20:36:32.426832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:32.426845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 20:36:32.426858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 20:36:32.426872 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 20:36:32.426886 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 20:36:32.426899 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 20:36:32.426914 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 20:36:32.426928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 20:36:32.426944 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 20:36:32.426958 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 20:36:32.426971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 20:36:32.426985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 20:36:32.427003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 20:36:32.427017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 20:36:32.427032 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 20:36:32.427046 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 20:36:32.427059 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 20:36:32.427076 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 20:36:32.427090 kernel: loop: module loaded
Jan 23 20:36:32.427103 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 20:36:32.427126 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 20:36:32.427139 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 20:36:32.427151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 20:36:32.427162 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 20:36:32.427175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:36:32.427190 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 20:36:32.427205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 20:36:32.427217 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 20:36:32.427229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 20:36:32.427241 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 20:36:32.427254 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 20:36:32.427266 kernel: fuse: init (API version 7.41)
Jan 23 20:36:32.427310 systemd-journald[1193]: Collecting audit messages is disabled.
Jan 23 20:36:32.427338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 20:36:32.427353 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 20:36:32.427367 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 20:36:32.427379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 20:36:32.427392 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 20:36:32.427404 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 20:36:32.427416 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 20:36:32.427429 systemd-journald[1193]: Journal started
Jan 23 20:36:32.427457 systemd-journald[1193]: Runtime Journal (/run/log/journal/e6a1a00433d141cbb44e98c9bc3ba8c9) is 4.7M, max 37.8M, 33.1M free.
Jan 23 20:36:32.051801 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 20:36:32.078170 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 20:36:32.078942 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 20:36:32.433737 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 20:36:32.440164 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 20:36:32.463993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 20:36:32.481750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 20:36:32.482263 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 20:36:32.485304 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 20:36:32.486090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 20:36:32.503731 kernel: loop0: detected capacity change from 0 to 128560
Jan 23 20:36:32.533820 systemd-journald[1193]: Time spent on flushing to /var/log/journal/e6a1a00433d141cbb44e98c9bc3ba8c9 is 107.043ms for 1177 entries.
Jan 23 20:36:32.533820 systemd-journald[1193]: System Journal (/var/log/journal/e6a1a00433d141cbb44e98c9bc3ba8c9) is 8M, max 584.8M, 576.8M free.
Jan 23 20:36:32.669796 systemd-journald[1193]: Received client request to flush runtime journal.
Jan 23 20:36:32.669845 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 20:36:32.669861 kernel: loop1: detected capacity change from 0 to 8
Jan 23 20:36:32.669875 kernel: ACPI: bus type drm_connector registered
Jan 23 20:36:32.669889 kernel: loop2: detected capacity change from 0 to 110984
Jan 23 20:36:32.550650 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 20:36:32.557177 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 20:36:32.565127 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 20:36:32.602759 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 20:36:32.672778 kernel: loop3: detected capacity change from 0 to 229808
Jan 23 20:36:32.602961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 20:36:32.646061 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 20:36:32.656016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 20:36:32.667849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 20:36:32.671781 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 20:36:32.697941 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 23 20:36:32.697961 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 23 20:36:32.702470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 20:36:32.705834 kernel: loop4: detected capacity change from 0 to 128560
Jan 23 20:36:32.714807 kernel: loop5: detected capacity change from 0 to 8
Jan 23 20:36:32.717956 kernel: loop6: detected capacity change from 0 to 110984
Jan 23 20:36:32.739736 kernel: loop7: detected capacity change from 0 to 229808
Jan 23 20:36:32.769481 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 23 20:36:32.770378 (sd-merge)[1271]: Merged extensions into '/usr'.
Jan 23 20:36:32.779893 systemd[1]: Reload requested from client PID 1217 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 20:36:32.780031 systemd[1]: Reloading...
Jan 23 20:36:32.940739 zram_generator::config[1297]: No configuration found.
Jan 23 20:36:33.000461 ldconfig[1210]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 20:36:33.184673 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 20:36:33.185075 systemd[1]: Reloading finished in 404 ms.
Jan 23 20:36:33.203890 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 20:36:33.210044 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 20:36:33.216971 systemd[1]: Starting ensure-sysext.service...
Jan 23 20:36:33.219517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 20:36:33.245327 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)...
Jan 23 20:36:33.245450 systemd[1]: Reloading...
Jan 23 20:36:33.253208 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 20:36:33.253431 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 20:36:33.253743 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 20:36:33.254014 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 20:36:33.254840 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 20:36:33.255116 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Jan 23 20:36:33.255185 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Jan 23 20:36:33.258798 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 20:36:33.258810 systemd-tmpfiles[1354]: Skipping /boot
Jan 23 20:36:33.266182 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 20:36:33.266196 systemd-tmpfiles[1354]: Skipping /boot
Jan 23 20:36:33.319765 zram_generator::config[1377]: No configuration found.
Jan 23 20:36:33.516321 systemd[1]: Reloading finished in 270 ms.
Jan 23 20:36:33.526425 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 20:36:33.527462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 20:36:33.548374 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 20:36:33.552040 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 20:36:33.555006 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 20:36:33.560052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 20:36:33.563063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 20:36:33.566204 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 20:36:33.570567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:33.570844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:36:33.572971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 20:36:33.578646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 20:36:33.590563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 20:36:33.594179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:36:33.594533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:36:33.594849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:33.603775 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 20:36:33.606830 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:33.607035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:36:33.607196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:36:33.607287 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:36:33.607376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:33.617478 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:33.618521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 20:36:33.631406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 20:36:33.639933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 20:36:33.639983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 20:36:33.640072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 20:36:33.641049 systemd[1]: Finished ensure-sysext.service.
Jan 23 20:36:33.649765 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 20:36:33.651786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 20:36:33.652563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 20:36:33.652800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 20:36:33.653590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 20:36:33.653786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 20:36:33.654498 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 20:36:33.654664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 20:36:33.655628 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 20:36:33.655815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 20:36:33.664260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 20:36:33.664384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 20:36:33.667854 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 20:36:33.670885 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 20:36:33.689388 systemd-udevd[1443]: Using default interface naming scheme 'v255'.
Jan 23 20:36:33.695050 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 20:36:33.697046 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 20:36:33.709206 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 20:36:33.710095 augenrules[1479]: No rules
Jan 23 20:36:33.711355 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 20:36:33.711950 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 20:36:33.729879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 20:36:33.735669 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 20:36:33.758688 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 20:36:33.926229 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 20:36:33.964990 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 20:36:33.965606 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 20:36:33.987651 systemd-networkd[1495]: lo: Link UP
Jan 23 20:36:33.987673 systemd-networkd[1495]: lo: Gained carrier
Jan 23 20:36:33.989678 systemd-networkd[1495]: Enumeration completed
Jan 23 20:36:33.989809 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 20:36:33.992318 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 20:36:33.995582 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 20:36:34.020611 systemd-resolved[1442]: Positive Trust Anchors:
Jan 23 20:36:34.026398 systemd-resolved[1442]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 20:36:34.026449 systemd-resolved[1442]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 20:36:34.048622 systemd-resolved[1442]: Using system hostname 'srv-zm8g6.gb1.brightbox.com'.
Jan 23 20:36:34.053704 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 20:36:34.054950 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 20:36:34.055535 systemd[1]: Reached target network.target - Network.
Jan 23 20:36:34.056788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 20:36:34.057243 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 20:36:34.057957 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 20:36:34.058422 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 20:36:34.058861 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 20:36:34.059447 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 20:36:34.059999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 20:36:34.060786 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 20:36:34.061249 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 20:36:34.061286 systemd[1]: Reached target paths.target - Path Units.
Jan 23 20:36:34.061747 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 20:36:34.063862 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 20:36:34.066163 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 20:36:34.072113 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 20:36:34.073122 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 20:36:34.074264 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 20:36:34.078466 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 20:36:34.080145 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 20:36:34.081342 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 20:36:34.085056 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 20:36:34.085597 systemd[1]: Reached target basic.target - Basic System.
Jan 23 20:36:34.086274 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 20:36:34.086311 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 20:36:34.089011 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 20:36:34.092012 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 20:36:34.095953 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 20:36:34.101741 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 20:36:34.104081 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 20:36:34.109960 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 20:36:34.114241 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:34.117692 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 20:36:34.117704 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 20:36:34.119094 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 20:36:34.119793 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 20:36:34.122237 systemd-networkd[1495]: eth0: Link UP
Jan 23 20:36:34.122416 systemd-networkd[1495]: eth0: Gained carrier
Jan 23 20:36:34.122448 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 20:36:34.123516 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 20:36:34.126127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 20:36:34.134559 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 20:36:34.143882 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 20:36:34.146757 jq[1535]: false
Jan 23 20:36:34.148929 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 20:36:34.153399 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 20:36:34.155916 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 20:36:34.158024 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 20:36:34.164662 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 20:36:34.167550 extend-filesystems[1537]: Found /dev/vda6
Jan 23 20:36:34.168915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 20:36:34.170648 systemd-networkd[1495]: eth0: DHCPv4 address 10.244.93.250/30, gateway 10.244.93.249 acquired from 10.244.93.249
Jan 23 20:36:34.172907 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection.
Jan 23 20:36:34.175265 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 20:36:34.177097 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 20:36:34.184765 extend-filesystems[1537]: Found /dev/vda9 Jan 23 20:36:34.183027 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 20:36:34.183401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 20:36:34.185242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 20:36:34.193348 extend-filesystems[1537]: Checking size of /dev/vda9 Jan 23 20:36:34.202741 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jan 23 20:36:34.201353 oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jan 23 20:36:34.206955 jq[1549]: true Jan 23 20:36:34.221606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 20:36:34.233103 extend-filesystems[1537]: Resized partition /dev/vda9 Jan 23 20:36:34.233642 update_engine[1548]: I20260123 20:36:34.232602 1548 main.cc:92] Flatcar Update Engine starting Jan 23 20:36:34.238958 oslogin_cache_refresh[1539]: Failure getting users, quitting Jan 23 20:36:34.241019 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting Jan 23 20:36:34.241019 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 20:36:34.241019 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache Jan 23 20:36:34.235277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 20:36:34.238983 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 20:36:34.239044 oslogin_cache_refresh[1539]: Refreshing group entry cache Jan 23 20:36:34.243854 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 20:36:34.246886 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting Jan 23 20:36:34.246886 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 20:36:34.246424 oslogin_cache_refresh[1539]: Failure getting groups, quitting Jan 23 20:36:34.246438 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 20:36:34.253893 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 23 20:36:34.252983 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 20:36:34.253252 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 20:36:34.255770 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 20:36:34.271316 jq[1564]: true Jan 23 20:36:34.274758 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 20:36:34.296965 kernel: ACPI: button: Power Button [PWRF] Jan 23 20:36:34.278102 dbus-daemon[1533]: [system] SELinux support is enabled Jan 23 20:36:34.278266 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 20:36:34.289682 dbus-daemon[1533]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1495 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 20:36:34.284242 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 23 20:36:34.284273 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 20:36:34.285807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 20:36:34.285825 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 20:36:34.295841 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 20:36:34.305749 tar[1553]: linux-amd64/LICENSE Jan 23 20:36:34.305749 tar[1553]: linux-amd64/helm Jan 23 20:36:34.306069 update_engine[1548]: I20260123 20:36:34.301967 1548 update_check_scheduler.cc:74] Next update check in 8m5s Jan 23 20:36:34.299499 systemd[1]: Started update-engine.service - Update Engine. Jan 23 20:36:34.368326 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 20:36:34.374955 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 20:36:34.375277 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 20:36:34.377410 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 20:36:34.392642 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 23 20:36:34.400243 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 20:36:34.400243 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 23 20:36:34.400243 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 23 20:36:34.407270 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Jan 23 20:36:34.401204 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 20:36:34.402797 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 20:36:34.421946 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 23 20:36:34.423367 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 20:36:34.429125 dbus-daemon[1533]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 20:36:34.432994 dbus-daemon[1533]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1581 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 20:36:34.459851 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 20:36:34.469011 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Jan 23 20:36:34.468691 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 20:36:34.473873 systemd[1]: Starting sshkeys.service... Jan 23 20:36:34.493905 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 20:36:34.495938 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 20:36:34.505382 systemd-logind[1547]: New seat seat0. Jan 23 20:36:34.511903 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 20:36:34.519825 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:36:34.654342 polkitd[1605]: Started polkitd version 126 Jan 23 20:36:34.659395 polkitd[1605]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 20:36:34.659985 polkitd[1605]: Loading rules from directory /run/polkit-1/rules.d Jan 23 20:36:34.660033 polkitd[1605]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 20:36:34.660332 polkitd[1605]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 20:36:34.660352 polkitd[1605]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 20:36:34.660384 polkitd[1605]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 20:36:34.660916 polkitd[1605]: Finished loading, compiling and executing 2 rules Jan 23 20:36:34.661165 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 20:36:34.679110 dbus-daemon[1533]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 20:36:34.679393 polkitd[1605]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 20:36:34.707740 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 20:36:34.710930 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 20:36:34.735254 systemd-hostnamed[1581]: Hostname set to (static) Jan 23 20:36:34.740189 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 20:36:34.879104 containerd[1575]: time="2026-01-23T20:36:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 20:36:34.879104 containerd[1575]: time="2026-01-23T20:36:34.874788334Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 20:36:34.939405 containerd[1575]: time="2026-01-23T20:36:34.939151530Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.396µs" Jan 23 20:36:34.943884 containerd[1575]: time="2026-01-23T20:36:34.943838344Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 20:36:34.943952 containerd[1575]: time="2026-01-23T20:36:34.943903444Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 20:36:34.944111 containerd[1575]: time="2026-01-23T20:36:34.944090991Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 20:36:34.944163 containerd[1575]: time="2026-01-23T20:36:34.944120841Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 20:36:34.944189 containerd[1575]: 
time="2026-01-23T20:36:34.944163747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944251 containerd[1575]: time="2026-01-23T20:36:34.944233826Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944251 containerd[1575]: time="2026-01-23T20:36:34.944249718Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944580 containerd[1575]: time="2026-01-23T20:36:34.944555989Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944608 containerd[1575]: time="2026-01-23T20:36:34.944580909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944608 containerd[1575]: time="2026-01-23T20:36:34.944593834Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944608 containerd[1575]: time="2026-01-23T20:36:34.944602458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 20:36:34.944729 containerd[1575]: time="2026-01-23T20:36:34.944693404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 20:36:34.945924 containerd[1575]: time="2026-01-23T20:36:34.944986747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 20:36:34.945924 containerd[1575]: time="2026-01-23T20:36:34.945036718Z" level=info msg="skip loading 
plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 20:36:34.945924 containerd[1575]: time="2026-01-23T20:36:34.945055143Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 20:36:34.945924 containerd[1575]: time="2026-01-23T20:36:34.945096436Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 20:36:34.945924 containerd[1575]: time="2026-01-23T20:36:34.945538626Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 20:36:34.945924 containerd[1575]: time="2026-01-23T20:36:34.945605074Z" level=info msg="metadata content store policy set" policy=shared Jan 23 20:36:34.949581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.949899519Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.949963327Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.949984359Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.949999510Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950014518Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950027192Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950051841Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950077434Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950092964Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950105856Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950117136Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 20:36:34.950187 containerd[1575]: time="2026-01-23T20:36:34.950132579Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950243125Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950269420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950304685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950322286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950334582Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950346264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950359663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950372240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950385523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950398249Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950410831Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950480661Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 20:36:34.950513 containerd[1575]: time="2026-01-23T20:36:34.950504578Z" level=info msg="Start snapshots syncer" Jan 23 20:36:34.950810 containerd[1575]: time="2026-01-23T20:36:34.950543726Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 20:36:34.952696 containerd[1575]: time="2026-01-23T20:36:34.950909828Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 20:36:34.952696 containerd[1575]: time="2026-01-23T20:36:34.950978508Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951056259Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951154029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951176159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951191885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951203449Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951221358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951233359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951246447Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951273993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951286848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951310180Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951348746Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951364816Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 20:36:34.952894 containerd[1575]: time="2026-01-23T20:36:34.951375514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951405989Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951418174Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951429687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951449076Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951477673Z" level=info msg="runtime interface created" Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951483609Z" level=info msg="created NRI interface" Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951492742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951506091Z" level=info msg="Connect containerd service" Jan 23 20:36:34.953201 containerd[1575]: time="2026-01-23T20:36:34.951531041Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 20:36:34.955505 
containerd[1575]: time="2026-01-23T20:36:34.954523079Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 20:36:35.079187 systemd-logind[1547]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 20:36:35.141867 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 20:36:35.206258 systemd-networkd[1495]: eth0: Gained IPv6LL Jan 23 20:36:35.211447 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Jan 23 20:36:35.213511 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 20:36:35.295310 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 20:36:35.300949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:36:35.367654 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 20:36:35.369076 containerd[1575]: time="2026-01-23T20:36:35.367912707Z" level=info msg="Start subscribing containerd event" Jan 23 20:36:35.369076 containerd[1575]: time="2026-01-23T20:36:35.368015648Z" level=info msg="Start recovering state" Jan 23 20:36:35.369373 containerd[1575]: time="2026-01-23T20:36:35.369273300Z" level=info msg="Start event monitor" Jan 23 20:36:35.369373 containerd[1575]: time="2026-01-23T20:36:35.369322556Z" level=info msg="Start cni network conf syncer for default" Jan 23 20:36:35.369373 containerd[1575]: time="2026-01-23T20:36:35.369333977Z" level=info msg="Start streaming server" Jan 23 20:36:35.369373 containerd[1575]: time="2026-01-23T20:36:35.369354090Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 20:36:35.369585 containerd[1575]: time="2026-01-23T20:36:35.369513758Z" level=info msg="runtime interface starting up..." 
Jan 23 20:36:35.369585 containerd[1575]: time="2026-01-23T20:36:35.369524949Z" level=info msg="starting plugins..." Jan 23 20:36:35.369585 containerd[1575]: time="2026-01-23T20:36:35.369545425Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 20:36:35.371551 containerd[1575]: time="2026-01-23T20:36:35.371523724Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 20:36:35.371683 containerd[1575]: time="2026-01-23T20:36:35.371671070Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 20:36:35.373531 containerd[1575]: time="2026-01-23T20:36:35.372895176Z" level=info msg="containerd successfully booted in 0.499451s" Jan 23 20:36:35.402421 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 20:36:35.476231 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 20:36:35.489177 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 20:36:35.518179 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 20:36:35.529437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 20:36:35.535043 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 20:36:35.537285 systemd[1]: Started sshd@0-10.244.93.250:22-68.220.241.50:47288.service - OpenSSH per-connection server daemon (68.220.241.50:47288). Jan 23 20:36:35.563643 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 20:36:35.563941 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 20:36:35.567332 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 20:36:35.606364 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 20:36:35.611167 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 20:36:35.614608 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 23 20:36:35.616032 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 20:36:35.649434 tar[1553]: linux-amd64/README.md Jan 23 20:36:35.667004 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 20:36:36.148647 sshd[1682]: Accepted publickey for core from 68.220.241.50 port 47288 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:36:36.152236 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:36:36.166149 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 20:36:36.172866 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 20:36:36.187592 systemd-logind[1547]: New session 1 of user core. Jan 23 20:36:36.209156 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 20:36:36.216545 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 20:36:36.243200 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 20:36:36.250989 systemd-logind[1547]: New session c1 of user core. Jan 23 20:36:36.370797 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:36:36.372449 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:36:36.414059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:36:36.415890 systemd[1697]: Queued start job for default target default.target. Jan 23 20:36:36.420895 systemd[1697]: Created slice app.slice - User Application Slice. Jan 23 20:36:36.420923 systemd[1697]: Reached target paths.target - Paths. Jan 23 20:36:36.420963 systemd[1697]: Reached target timers.target - Timers. Jan 23 20:36:36.422515 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 23 20:36:36.423454 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 20:36:36.433447 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 20:36:36.434245 systemd[1697]: Reached target sockets.target - Sockets. Jan 23 20:36:36.434294 systemd[1697]: Reached target basic.target - Basic System. Jan 23 20:36:36.434328 systemd[1697]: Reached target default.target - Main User Target. Jan 23 20:36:36.434354 systemd[1697]: Startup finished in 171ms. Jan 23 20:36:36.434780 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 20:36:36.446033 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 20:36:36.717313 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Jan 23 20:36:36.720594 systemd-networkd[1495]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:177e:24:19ff:fef4:5dfa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:177e:24:19ff:fef4:5dfa/64 assigned by NDisc. Jan 23 20:36:36.720850 systemd-networkd[1495]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 20:36:36.872151 systemd[1]: Started sshd@1-10.244.93.250:22-68.220.241.50:55068.service - OpenSSH per-connection server daemon (68.220.241.50:55068). Jan 23 20:36:36.998852 kubelet[1710]: E0123 20:36:36.998625 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 20:36:37.005656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 20:36:37.006114 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 20:36:37.007121 systemd[1]: kubelet.service: Consumed 1.082s CPU time, 268.6M memory peak. Jan 23 20:36:37.480432 sshd[1721]: Accepted publickey for core from 68.220.241.50 port 55068 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:36:37.483979 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:36:37.497623 systemd-logind[1547]: New session 2 of user core. Jan 23 20:36:37.509927 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 20:36:37.893795 sshd[1726]: Connection closed by 68.220.241.50 port 55068 Jan 23 20:36:37.896271 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jan 23 20:36:37.906461 systemd[1]: sshd@1-10.244.93.250:22-68.220.241.50:55068.service: Deactivated successfully. Jan 23 20:36:37.910355 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 20:36:37.913893 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Jan 23 20:36:37.916460 systemd-logind[1547]: Removed session 2. Jan 23 20:36:37.964706 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Jan 23 20:36:38.005884 systemd[1]: Started sshd@2-10.244.93.250:22-68.220.241.50:55078.service - OpenSSH per-connection server daemon (68.220.241.50:55078). Jan 23 20:36:38.391768 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:36:38.405781 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 20:36:38.612910 sshd[1732]: Accepted publickey for core from 68.220.241.50 port 55078 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:36:38.616238 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:36:38.628992 systemd-logind[1547]: New session 3 of user core. Jan 23 20:36:38.643028 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 23 20:36:39.023759 sshd[1737]: Connection closed by 68.220.241.50 port 55078
Jan 23 20:36:39.025022 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Jan 23 20:36:39.033780 systemd[1]: sshd@2-10.244.93.250:22-68.220.241.50:55078.service: Deactivated successfully.
Jan 23 20:36:39.039115 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 20:36:39.040344 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit.
Jan 23 20:36:39.042273 systemd-logind[1547]: Removed session 3.
Jan 23 20:36:40.687854 login[1690]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 23 20:36:40.692338 login[1691]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 23 20:36:40.694592 systemd-logind[1547]: New session 4 of user core.
Jan 23 20:36:40.704066 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 20:36:40.706956 systemd-logind[1547]: New session 5 of user core.
Jan 23 20:36:40.712052 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 20:36:42.419776 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:42.419947 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 20:36:42.435398 coreos-metadata[1532]: Jan 23 20:36:42.435 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 20:36:42.440024 coreos-metadata[1608]: Jan 23 20:36:42.439 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 20:36:42.456156 coreos-metadata[1608]: Jan 23 20:36:42.456 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 23 20:36:42.456285 coreos-metadata[1532]: Jan 23 20:36:42.456 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 23 20:36:42.464001 coreos-metadata[1532]: Jan 23 20:36:42.463 INFO Fetch failed with 404: resource not found
Jan 23 20:36:42.464001 coreos-metadata[1532]: Jan 23 20:36:42.463 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 23 20:36:42.465440 coreos-metadata[1532]: Jan 23 20:36:42.465 INFO Fetch successful
Jan 23 20:36:42.465976 coreos-metadata[1532]: Jan 23 20:36:42.465 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 23 20:36:42.480319 coreos-metadata[1532]: Jan 23 20:36:42.480 INFO Fetch successful
Jan 23 20:36:42.480587 coreos-metadata[1532]: Jan 23 20:36:42.480 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 23 20:36:42.481792 coreos-metadata[1608]: Jan 23 20:36:42.481 INFO Fetch successful
Jan 23 20:36:42.481900 coreos-metadata[1608]: Jan 23 20:36:42.481 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 20:36:42.496422 coreos-metadata[1532]: Jan 23 20:36:42.496 INFO Fetch successful
Jan 23 20:36:42.496781 coreos-metadata[1532]: Jan 23 20:36:42.496 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 23 20:36:42.511648 coreos-metadata[1608]: Jan 23 20:36:42.511 INFO Fetch successful
Jan 23 20:36:42.515448 coreos-metadata[1532]: Jan 23 20:36:42.515 INFO Fetch successful
Jan 23 20:36:42.515448 coreos-metadata[1532]: Jan 23 20:36:42.515 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 23 20:36:42.516644 unknown[1608]: wrote ssh authorized keys file for user: core
Jan 23 20:36:42.532637 coreos-metadata[1532]: Jan 23 20:36:42.532 INFO Fetch successful
Jan 23 20:36:42.547269 update-ssh-keys[1772]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 20:36:42.550276 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 20:36:42.553299 systemd[1]: Finished sshkeys.service.
Jan 23 20:36:42.572039 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 20:36:42.572686 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 20:36:42.574006 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 20:36:42.574427 systemd[1]: Startup finished in 3.192s (kernel) + 13.800s (initrd) + 11.199s (userspace) = 28.192s.
Jan 23 20:36:47.168746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 20:36:47.173090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:36:47.364112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:36:47.371179 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 20:36:47.422865 kubelet[1788]: E0123 20:36:47.422735 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 20:36:47.426865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 20:36:47.427020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 20:36:47.427693 systemd[1]: kubelet.service: Consumed 208ms CPU time, 110.5M memory peak.
Jan 23 20:36:49.132631 systemd[1]: Started sshd@3-10.244.93.250:22-68.220.241.50:36794.service - OpenSSH per-connection server daemon (68.220.241.50:36794).
Jan 23 20:36:49.740330 sshd[1796]: Accepted publickey for core from 68.220.241.50 port 36794 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:36:49.743940 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:36:49.755791 systemd-logind[1547]: New session 6 of user core.
Jan 23 20:36:49.762865 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 20:36:50.155242 sshd[1799]: Connection closed by 68.220.241.50 port 36794
Jan 23 20:36:50.155657 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
Jan 23 20:36:50.164926 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit.
Jan 23 20:36:50.166092 systemd[1]: sshd@3-10.244.93.250:22-68.220.241.50:36794.service: Deactivated successfully.
Jan 23 20:36:50.169062 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 20:36:50.173020 systemd-logind[1547]: Removed session 6.
Jan 23 20:36:50.265898 systemd[1]: Started sshd@4-10.244.93.250:22-68.220.241.50:36802.service - OpenSSH per-connection server daemon (68.220.241.50:36802).
Jan 23 20:36:50.869168 sshd[1805]: Accepted publickey for core from 68.220.241.50 port 36802 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:36:50.872486 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:36:50.886187 systemd-logind[1547]: New session 7 of user core.
Jan 23 20:36:50.893010 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 20:36:51.271248 sshd[1808]: Connection closed by 68.220.241.50 port 36802
Jan 23 20:36:51.272668 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Jan 23 20:36:51.281696 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit.
Jan 23 20:36:51.281995 systemd[1]: sshd@4-10.244.93.250:22-68.220.241.50:36802.service: Deactivated successfully.
Jan 23 20:36:51.285499 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 20:36:51.288651 systemd-logind[1547]: Removed session 7.
Jan 23 20:36:51.382920 systemd[1]: Started sshd@5-10.244.93.250:22-68.220.241.50:36804.service - OpenSSH per-connection server daemon (68.220.241.50:36804).
Jan 23 20:36:51.981351 sshd[1814]: Accepted publickey for core from 68.220.241.50 port 36804 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:36:51.983895 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:36:51.993790 systemd-logind[1547]: New session 8 of user core.
Jan 23 20:36:51.999901 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 20:36:52.392161 sshd[1817]: Connection closed by 68.220.241.50 port 36804
Jan 23 20:36:52.393744 sshd-session[1814]: pam_unix(sshd:session): session closed for user core
Jan 23 20:36:52.403558 systemd[1]: sshd@5-10.244.93.250:22-68.220.241.50:36804.service: Deactivated successfully.
Jan 23 20:36:52.406696 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 20:36:52.408795 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit.
Jan 23 20:36:52.410312 systemd-logind[1547]: Removed session 8.
Jan 23 20:36:52.503272 systemd[1]: Started sshd@6-10.244.93.250:22-68.220.241.50:33906.service - OpenSSH per-connection server daemon (68.220.241.50:33906).
Jan 23 20:36:53.084782 sshd[1823]: Accepted publickey for core from 68.220.241.50 port 33906 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:36:53.087069 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:36:53.098301 systemd-logind[1547]: New session 9 of user core.
Jan 23 20:36:53.110141 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 20:36:53.417903 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 20:36:53.418682 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 20:36:53.438351 sudo[1827]: pam_unix(sudo:session): session closed for user root
Jan 23 20:36:53.527979 sshd[1826]: Connection closed by 68.220.241.50 port 33906
Jan 23 20:36:53.529802 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Jan 23 20:36:53.541596 systemd[1]: sshd@6-10.244.93.250:22-68.220.241.50:33906.service: Deactivated successfully.
Jan 23 20:36:53.545229 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 20:36:53.546478 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit.
Jan 23 20:36:53.549131 systemd-logind[1547]: Removed session 9.
Jan 23 20:36:53.647103 systemd[1]: Started sshd@7-10.244.93.250:22-68.220.241.50:33918.service - OpenSSH per-connection server daemon (68.220.241.50:33918).
Jan 23 20:36:54.232992 sshd[1833]: Accepted publickey for core from 68.220.241.50 port 33918 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:36:54.236255 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:36:54.246822 systemd-logind[1547]: New session 10 of user core.
Jan 23 20:36:54.253866 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 20:36:54.555524 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 20:36:54.555918 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 20:36:54.565113 sudo[1838]: pam_unix(sudo:session): session closed for user root
Jan 23 20:36:54.574050 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 20:36:54.574286 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 20:36:54.585842 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 20:36:54.652398 augenrules[1860]: No rules
Jan 23 20:36:54.653607 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 20:36:54.653982 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 20:36:54.655012 sudo[1837]: pam_unix(sudo:session): session closed for user root
Jan 23 20:36:54.745935 sshd[1836]: Connection closed by 68.220.241.50 port 33918
Jan 23 20:36:54.745698 sshd-session[1833]: pam_unix(sshd:session): session closed for user core
Jan 23 20:36:54.752977 systemd[1]: sshd@7-10.244.93.250:22-68.220.241.50:33918.service: Deactivated successfully.
Jan 23 20:36:54.756025 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 20:36:54.757480 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit.
Jan 23 20:36:54.760371 systemd-logind[1547]: Removed session 10.
Jan 23 20:36:54.852236 systemd[1]: Started sshd@8-10.244.93.250:22-68.220.241.50:33922.service - OpenSSH per-connection server daemon (68.220.241.50:33922).
Jan 23 20:36:55.439836 sshd[1869]: Accepted publickey for core from 68.220.241.50 port 33922 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:36:55.442803 sshd-session[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:36:55.454248 systemd-logind[1547]: New session 11 of user core.
Jan 23 20:36:55.460926 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 20:36:55.783995 sudo[1873]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 20:36:55.784508 sudo[1873]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 20:36:56.187831 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 20:36:56.202244 (dockerd)[1891]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 20:36:56.540002 dockerd[1891]: time="2026-01-23T20:36:56.539462076Z" level=info msg="Starting up"
Jan 23 20:36:56.541468 dockerd[1891]: time="2026-01-23T20:36:56.541405032Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 20:36:56.561635 dockerd[1891]: time="2026-01-23T20:36:56.561543795Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 20:36:56.616750 dockerd[1891]: time="2026-01-23T20:36:56.616338651Z" level=info msg="Loading containers: start."
Jan 23 20:36:56.628982 kernel: Initializing XFRM netlink socket
Jan 23 20:36:56.877890 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection.
Jan 23 20:36:56.919562 systemd-networkd[1495]: docker0: Link UP
Jan 23 20:36:56.922280 dockerd[1891]: time="2026-01-23T20:36:56.922243968Z" level=info msg="Loading containers: done."
Jan 23 20:36:56.939215 dockerd[1891]: time="2026-01-23T20:36:56.938868408Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 20:36:56.939215 dockerd[1891]: time="2026-01-23T20:36:56.938953197Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 20:36:56.939215 dockerd[1891]: time="2026-01-23T20:36:56.939032960Z" level=info msg="Initializing buildkit"
Jan 23 20:36:56.975612 dockerd[1891]: time="2026-01-23T20:36:56.975501547Z" level=info msg="Completed buildkit initialization"
Jan 23 20:36:56.982706 dockerd[1891]: time="2026-01-23T20:36:56.982538769Z" level=info msg="Daemon has completed initialization"
Jan 23 20:36:56.982706 dockerd[1891]: time="2026-01-23T20:36:56.982611384Z" level=info msg="API listen on /run/docker.sock"
Jan 23 20:36:56.983338 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 20:36:58.302016 systemd-resolved[1442]: Clock change detected. Flushing caches.
Jan 23 20:36:58.303329 systemd-timesyncd[1472]: Contacted time server [2a01:7e00::f03c:94ff:fee2:9c69]:123 (2.flatcar.pool.ntp.org).
Jan 23 20:36:58.303457 systemd-timesyncd[1472]: Initial clock synchronization to Fri 2026-01-23 20:36:58.301659 UTC.
Jan 23 20:36:58.587236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 20:36:58.590997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:36:58.787908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:36:58.802632 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 20:36:58.858194 kubelet[2111]: E0123 20:36:58.858020 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 20:36:58.861180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 20:36:58.861367 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 20:36:58.862059 systemd[1]: kubelet.service: Consumed 210ms CPU time, 110.8M memory peak.
Jan 23 20:36:59.052026 containerd[1575]: time="2026-01-23T20:36:59.051934214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 23 20:36:59.702856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549233179.mount: Deactivated successfully.
Jan 23 20:37:01.052004 containerd[1575]: time="2026-01-23T20:37:01.051777836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:01.052734 containerd[1575]: time="2026-01-23T20:37:01.052701056Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720"
Jan 23 20:37:01.054350 containerd[1575]: time="2026-01-23T20:37:01.054097599Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:01.057599 containerd[1575]: time="2026-01-23T20:37:01.057497218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:01.059153 containerd[1575]: time="2026-01-23T20:37:01.059090709Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.007047314s"
Jan 23 20:37:01.059469 containerd[1575]: time="2026-01-23T20:37:01.059413052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 23 20:37:01.060466 containerd[1575]: time="2026-01-23T20:37:01.060408689Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 23 20:37:02.914325 containerd[1575]: time="2026-01-23T20:37:02.913903315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:02.915051 containerd[1575]: time="2026-01-23T20:37:02.915015850Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789"
Jan 23 20:37:02.915499 containerd[1575]: time="2026-01-23T20:37:02.915470588Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:02.918250 containerd[1575]: time="2026-01-23T20:37:02.917921357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:02.919058 containerd[1575]: time="2026-01-23T20:37:02.918822093Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.858375172s"
Jan 23 20:37:02.919058 containerd[1575]: time="2026-01-23T20:37:02.918862789Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 23 20:37:02.919568 containerd[1575]: time="2026-01-23T20:37:02.919538704Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 23 20:37:04.256565 containerd[1575]: time="2026-01-23T20:37:04.256471868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:04.258772 containerd[1575]: time="2026-01-23T20:37:04.258106410Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110"
Jan 23 20:37:04.259575 containerd[1575]: time="2026-01-23T20:37:04.259519017Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:04.264320 containerd[1575]: time="2026-01-23T20:37:04.264235592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:04.267052 containerd[1575]: time="2026-01-23T20:37:04.266993239Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.347343535s"
Jan 23 20:37:04.267241 containerd[1575]: time="2026-01-23T20:37:04.267225601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 23 20:37:04.269169 containerd[1575]: time="2026-01-23T20:37:04.269134455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 20:37:06.057553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697827049.mount: Deactivated successfully.
Jan 23 20:37:06.595050 containerd[1575]: time="2026-01-23T20:37:06.594449194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:06.596035 containerd[1575]: time="2026-01-23T20:37:06.596011663Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104"
Jan 23 20:37:06.596614 containerd[1575]: time="2026-01-23T20:37:06.596591887Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:06.598787 containerd[1575]: time="2026-01-23T20:37:06.598757279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:06.600596 containerd[1575]: time="2026-01-23T20:37:06.600571190Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.331172163s"
Jan 23 20:37:06.600660 containerd[1575]: time="2026-01-23T20:37:06.600605971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 23 20:37:06.602310 containerd[1575]: time="2026-01-23T20:37:06.602120746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 23 20:37:07.214581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395380846.mount: Deactivated successfully.
Jan 23 20:37:07.683591 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 20:37:08.229998 containerd[1575]: time="2026-01-23T20:37:08.229909995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:08.231558 containerd[1575]: time="2026-01-23T20:37:08.231512874Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jan 23 20:37:08.233041 containerd[1575]: time="2026-01-23T20:37:08.232330260Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:08.235302 containerd[1575]: time="2026-01-23T20:37:08.235238097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:08.236137 containerd[1575]: time="2026-01-23T20:37:08.236102890Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.633948544s"
Jan 23 20:37:08.236229 containerd[1575]: time="2026-01-23T20:37:08.236151676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 23 20:37:08.237049 containerd[1575]: time="2026-01-23T20:37:08.237003401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 20:37:09.053388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 20:37:09.056451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:37:09.071151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675472207.mount: Deactivated successfully.
Jan 23 20:37:09.077045 containerd[1575]: time="2026-01-23T20:37:09.076380396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 20:37:09.078996 containerd[1575]: time="2026-01-23T20:37:09.078964703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 23 20:37:09.079469 containerd[1575]: time="2026-01-23T20:37:09.079440992Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 20:37:09.081589 containerd[1575]: time="2026-01-23T20:37:09.081562767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 20:37:09.082839 containerd[1575]: time="2026-01-23T20:37:09.082813545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 845.776967ms"
Jan 23 20:37:09.082953 containerd[1575]: time="2026-01-23T20:37:09.082938688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 23 20:37:09.083845 containerd[1575]: time="2026-01-23T20:37:09.083827001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 23 20:37:09.244771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:37:09.265684 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 20:37:09.331693 kubelet[2260]: E0123 20:37:09.331420 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 20:37:09.335682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 20:37:09.335903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 20:37:09.336765 systemd[1]: kubelet.service: Consumed 219ms CPU time, 109.7M memory peak.
Jan 23 20:37:09.708307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859563539.mount: Deactivated successfully.
Jan 23 20:37:14.095787 containerd[1575]: time="2026-01-23T20:37:14.095641912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:14.098067 containerd[1575]: time="2026-01-23T20:37:14.097279984Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235"
Jan 23 20:37:14.099092 containerd[1575]: time="2026-01-23T20:37:14.099055890Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:14.104331 containerd[1575]: time="2026-01-23T20:37:14.104300143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 20:37:14.105898 containerd[1575]: time="2026-01-23T20:37:14.105865632Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.022012611s"
Jan 23 20:37:14.106002 containerd[1575]: time="2026-01-23T20:37:14.105901197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 23 20:37:18.746738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:37:18.746924 systemd[1]: kubelet.service: Consumed 219ms CPU time, 109.7M memory peak.
Jan 23 20:37:18.749445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:37:18.780579 systemd[1]: Reload requested from client PID 2352 ('systemctl') (unit session-11.scope)...
Jan 23 20:37:18.780783 systemd[1]: Reloading...
Jan 23 20:37:18.928294 zram_generator::config[2406]: No configuration found.
Jan 23 20:37:19.172169 systemd[1]: Reloading finished in 390 ms.
Jan 23 20:37:19.245676 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 20:37:19.245767 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 20:37:19.246084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:37:19.246130 systemd[1]: kubelet.service: Consumed 136ms CPU time, 98.5M memory peak.
Jan 23 20:37:19.247652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 20:37:19.416938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 20:37:19.426607 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 20:37:19.490835 kubelet[2464]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 20:37:19.491211 kubelet[2464]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 20:37:19.491263 kubelet[2464]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 20:37:19.493318 kubelet[2464]: I0123 20:37:19.493279 2464 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 20:37:20.042669 kubelet[2464]: I0123 20:37:20.042626 2464 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 20:37:20.044285 kubelet[2464]: I0123 20:37:20.042905 2464 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 20:37:20.044285 kubelet[2464]: I0123 20:37:20.043211 2464 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 20:37:20.083382 kubelet[2464]: I0123 20:37:20.083353 2464 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 20:37:20.085477 kubelet[2464]: E0123 20:37:20.085446 2464 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.93.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 20:37:20.106983 kubelet[2464]: I0123 20:37:20.106961 2464 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 20:37:20.115014 kubelet[2464]: I0123 20:37:20.114994 2464 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 20:37:20.117992 kubelet[2464]: I0123 20:37:20.117961 2464 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 20:37:20.120910 kubelet[2464]: I0123 20:37:20.118093 2464 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-zm8g6.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 20:37:20.121156 kubelet[2464]: I0123 20:37:20.121145 2464 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
20:37:20.121210 kubelet[2464]: I0123 20:37:20.121204 2464 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 20:37:20.122103 kubelet[2464]: I0123 20:37:20.122086 2464 state_mem.go:36] "Initialized new in-memory state store" Jan 23 20:37:20.124471 kubelet[2464]: I0123 20:37:20.124446 2464 kubelet.go:480] "Attempting to sync node with API server" Jan 23 20:37:20.124572 kubelet[2464]: I0123 20:37:20.124563 2464 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 20:37:20.124654 kubelet[2464]: I0123 20:37:20.124648 2464 kubelet.go:386] "Adding apiserver pod source" Jan 23 20:37:20.129761 kubelet[2464]: I0123 20:37:20.129747 2464 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 20:37:20.146089 kubelet[2464]: E0123 20:37:20.145642 2464 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.93.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zm8g6.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 20:37:20.146089 kubelet[2464]: I0123 20:37:20.146010 2464 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 20:37:20.146564 kubelet[2464]: E0123 20:37:20.146541 2464 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.93.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 20:37:20.146719 kubelet[2464]: I0123 20:37:20.146559 2464 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 
20:37:20.147465 kubelet[2464]: W0123 20:37:20.147446 2464 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 20:37:20.152297 kubelet[2464]: I0123 20:37:20.152223 2464 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 20:37:20.152402 kubelet[2464]: I0123 20:37:20.152394 2464 server.go:1289] "Started kubelet" Jan 23 20:37:20.156030 kubelet[2464]: I0123 20:37:20.156014 2464 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 20:37:20.159824 kubelet[2464]: E0123 20:37:20.157242 2464 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.93.250:6443/api/v1/namespaces/default/events\": dial tcp 10.244.93.250:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-zm8g6.gb1.brightbox.com.188d769bccab0a8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-zm8g6.gb1.brightbox.com,UID:srv-zm8g6.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-zm8g6.gb1.brightbox.com,},FirstTimestamp:2026-01-23 20:37:20.15224283 +0000 UTC m=+0.718659590,LastTimestamp:2026-01-23 20:37:20.15224283 +0000 UTC m=+0.718659590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-zm8g6.gb1.brightbox.com,}" Jan 23 20:37:20.160251 kubelet[2464]: I0123 20:37:20.160228 2464 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 20:37:20.165426 kubelet[2464]: I0123 20:37:20.164692 2464 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 20:37:20.165426 kubelet[2464]: E0123 20:37:20.165022 2464 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" Jan 23 20:37:20.169017 kubelet[2464]: I0123 
20:37:20.169003 2464 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 20:37:20.169149 kubelet[2464]: I0123 20:37:20.169141 2464 reconciler.go:26] "Reconciler: start to sync state" Jan 23 20:37:20.172529 kubelet[2464]: E0123 20:37:20.172505 2464 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.93.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 20:37:20.175082 kubelet[2464]: I0123 20:37:20.173952 2464 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 20:37:20.175496 kubelet[2464]: I0123 20:37:20.174924 2464 factory.go:223] Registration of the systemd container factory successfully Jan 23 20:37:20.175724 kubelet[2464]: I0123 20:37:20.175705 2464 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 20:37:20.176018 kubelet[2464]: I0123 20:37:20.175061 2464 server.go:317] "Adding debug handlers to kubelet server" Jan 23 20:37:20.177037 kubelet[2464]: I0123 20:37:20.177013 2464 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 20:37:20.178235 kubelet[2464]: E0123 20:37:20.178071 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zm8g6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.250:6443: connect: connection refused" interval="200ms" Jan 23 20:37:20.179661 kubelet[2464]: I0123 20:37:20.179646 2464 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 20:37:20.182676 kubelet[2464]: I0123 20:37:20.182655 2464 factory.go:223] Registration of the containerd container factory successfully Jan 23 20:37:20.183297 kubelet[2464]: E0123 20:37:20.183277 2464 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 20:37:20.207662 kubelet[2464]: I0123 20:37:20.207156 2464 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 20:37:20.209989 kubelet[2464]: I0123 20:37:20.208167 2464 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 20:37:20.209989 kubelet[2464]: I0123 20:37:20.208201 2464 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 20:37:20.209989 kubelet[2464]: I0123 20:37:20.208225 2464 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 20:37:20.209989 kubelet[2464]: I0123 20:37:20.208233 2464 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 20:37:20.209989 kubelet[2464]: E0123 20:37:20.208376 2464 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 20:37:20.215238 kubelet[2464]: E0123 20:37:20.215214 2464 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.93.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 20:37:20.215661 kubelet[2464]: I0123 20:37:20.215422 2464 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 20:37:20.215661 kubelet[2464]: I0123 20:37:20.215438 2464 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 20:37:20.215661 kubelet[2464]: I0123 20:37:20.215457 2464 state_mem.go:36] "Initialized new in-memory state store" Jan 23 20:37:20.216656 kubelet[2464]: I0123 20:37:20.216642 2464 policy_none.go:49] "None policy: Start" Jan 23 20:37:20.216743 kubelet[2464]: I0123 20:37:20.216736 2464 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 20:37:20.216799 kubelet[2464]: I0123 20:37:20.216793 2464 state_mem.go:35] "Initializing new in-memory state store" Jan 23 20:37:20.228175 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 20:37:20.241096 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 20:37:20.246717 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 20:37:20.255437 kubelet[2464]: E0123 20:37:20.255146 2464 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 20:37:20.256237 kubelet[2464]: I0123 20:37:20.256223 2464 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 20:37:20.256799 kubelet[2464]: I0123 20:37:20.256393 2464 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 20:37:20.257730 kubelet[2464]: I0123 20:37:20.257717 2464 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 20:37:20.258487 kubelet[2464]: E0123 20:37:20.258301 2464 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 20:37:20.258487 kubelet[2464]: E0123 20:37:20.258353 2464 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-zm8g6.gb1.brightbox.com\" not found" Jan 23 20:37:20.303329 update_engine[1548]: I20260123 20:37:20.300623 1548 update_attempter.cc:509] Updating boot flags... Jan 23 20:37:20.329809 systemd[1]: Created slice kubepods-burstable-podfea85c7de01f4e6076cae7d607872dee.slice - libcontainer container kubepods-burstable-podfea85c7de01f4e6076cae7d607872dee.slice. Jan 23 20:37:20.341362 kubelet[2464]: E0123 20:37:20.341170 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.344789 systemd[1]: Created slice kubepods-burstable-podfeb77ccb1142fa8b0c6a699106052bd7.slice - libcontainer container kubepods-burstable-podfeb77ccb1142fa8b0c6a699106052bd7.slice. 
Jan 23 20:37:20.352101 kubelet[2464]: E0123 20:37:20.352081 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.354541 systemd[1]: Created slice kubepods-burstable-pod6a18d9620a5ebb182ce23cd2aef20fb1.slice - libcontainer container kubepods-burstable-pod6a18d9620a5ebb182ce23cd2aef20fb1.slice. Jan 23 20:37:20.357953 kubelet[2464]: E0123 20:37:20.357923 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.360533 kubelet[2464]: I0123 20:37:20.360504 2464 kubelet_node_status.go:75] "Attempting to register node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.363635 kubelet[2464]: E0123 20:37:20.363600 2464 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.93.250:6443/api/v1/nodes\": dial tcp 10.244.93.250:6443: connect: connection refused" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.382789 kubelet[2464]: E0123 20:37:20.382697 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zm8g6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.250:6443: connect: connection refused" interval="400ms" Jan 23 20:37:20.470456 kubelet[2464]: I0123 20:37:20.470427 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.470755 kubelet[2464]: I0123 
20:37:20.470738 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a18d9620a5ebb182ce23cd2aef20fb1-kubeconfig\") pod \"kube-scheduler-srv-zm8g6.gb1.brightbox.com\" (UID: \"6a18d9620a5ebb182ce23cd2aef20fb1\") " pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.470855 kubelet[2464]: I0123 20:37:20.470846 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fea85c7de01f4e6076cae7d607872dee-ca-certs\") pod \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" (UID: \"fea85c7de01f4e6076cae7d607872dee\") " pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.470942 kubelet[2464]: I0123 20:37:20.470933 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fea85c7de01f4e6076cae7d607872dee-usr-share-ca-certificates\") pod \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" (UID: \"fea85c7de01f4e6076cae7d607872dee\") " pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.471035 kubelet[2464]: I0123 20:37:20.471024 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-k8s-certs\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.471110 kubelet[2464]: I0123 20:37:20.471102 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-kubeconfig\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" 
(UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.471184 kubelet[2464]: I0123 20:37:20.471176 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fea85c7de01f4e6076cae7d607872dee-k8s-certs\") pod \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" (UID: \"fea85c7de01f4e6076cae7d607872dee\") " pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.471254 kubelet[2464]: I0123 20:37:20.471246 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-ca-certs\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.471375 kubelet[2464]: I0123 20:37:20.471365 2464 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-flexvolume-dir\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.568376 kubelet[2464]: I0123 20:37:20.568103 2464 kubelet_node_status.go:75] "Attempting to register node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.569106 kubelet[2464]: E0123 20:37:20.568850 2464 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.93.250:6443/api/v1/nodes\": dial tcp 10.244.93.250:6443: connect: connection refused" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.644849 containerd[1575]: time="2026-01-23T20:37:20.644335112Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-zm8g6.gb1.brightbox.com,Uid:fea85c7de01f4e6076cae7d607872dee,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:20.660662 containerd[1575]: time="2026-01-23T20:37:20.660624997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-zm8g6.gb1.brightbox.com,Uid:feb77ccb1142fa8b0c6a699106052bd7,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:20.661269 containerd[1575]: time="2026-01-23T20:37:20.661057924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-zm8g6.gb1.brightbox.com,Uid:6a18d9620a5ebb182ce23cd2aef20fb1,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:20.760719 containerd[1575]: time="2026-01-23T20:37:20.760662048Z" level=info msg="connecting to shim b1bd610ae3ee282db0cec75ec01e3e1e0e38ee8431a54a86b623513c3e3c6239" address="unix:///run/containerd/s/2c7c1a8f408856464187735ecf19de65063e3e4d9e6ad3b2788d96175c088579" namespace=k8s.io protocol=ttrpc version=3 Jan 23 20:37:20.760968 containerd[1575]: time="2026-01-23T20:37:20.760664141Z" level=info msg="connecting to shim c556ec72de8c57aa01e4977dcef2e8bc5e60a0bbef422e8dce9625d0807df715" address="unix:///run/containerd/s/33cd6e51e16018ebdfcb9e0c87ce4af8562a75b94a5e5eb4b9b0314988deffb4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 20:37:20.767082 containerd[1575]: time="2026-01-23T20:37:20.767044198Z" level=info msg="connecting to shim ea79f310a1a46d8a7df36a1a806b34c412fba2fca1b454ad237c00350b0825b5" address="unix:///run/containerd/s/d26ba0af33745753cada0e5985dfc1a0ccfc16bbb736a8b36a65dbe72b0f93c6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 20:37:20.786765 kubelet[2464]: E0123 20:37:20.785790 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-zm8g6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.250:6443: connect: connection refused" interval="800ms" Jan 23 20:37:20.861416 
systemd[1]: Started cri-containerd-b1bd610ae3ee282db0cec75ec01e3e1e0e38ee8431a54a86b623513c3e3c6239.scope - libcontainer container b1bd610ae3ee282db0cec75ec01e3e1e0e38ee8431a54a86b623513c3e3c6239. Jan 23 20:37:20.863461 systemd[1]: Started cri-containerd-c556ec72de8c57aa01e4977dcef2e8bc5e60a0bbef422e8dce9625d0807df715.scope - libcontainer container c556ec72de8c57aa01e4977dcef2e8bc5e60a0bbef422e8dce9625d0807df715. Jan 23 20:37:20.865447 systemd[1]: Started cri-containerd-ea79f310a1a46d8a7df36a1a806b34c412fba2fca1b454ad237c00350b0825b5.scope - libcontainer container ea79f310a1a46d8a7df36a1a806b34c412fba2fca1b454ad237c00350b0825b5. Jan 23 20:37:20.958033 containerd[1575]: time="2026-01-23T20:37:20.957999507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-zm8g6.gb1.brightbox.com,Uid:fea85c7de01f4e6076cae7d607872dee,Namespace:kube-system,Attempt:0,} returns sandbox id \"c556ec72de8c57aa01e4977dcef2e8bc5e60a0bbef422e8dce9625d0807df715\"" Jan 23 20:37:20.958440 containerd[1575]: time="2026-01-23T20:37:20.958002247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-zm8g6.gb1.brightbox.com,Uid:feb77ccb1142fa8b0c6a699106052bd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1bd610ae3ee282db0cec75ec01e3e1e0e38ee8431a54a86b623513c3e3c6239\"" Jan 23 20:37:20.973488 kubelet[2464]: I0123 20:37:20.973445 2464 kubelet_node_status.go:75] "Attempting to register node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.973856 kubelet[2464]: E0123 20:37:20.973716 2464 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.93.250:6443/api/v1/nodes\": dial tcp 10.244.93.250:6443: connect: connection refused" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:20.974159 containerd[1575]: time="2026-01-23T20:37:20.974123591Z" level=info msg="CreateContainer within sandbox \"c556ec72de8c57aa01e4977dcef2e8bc5e60a0bbef422e8dce9625d0807df715\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 20:37:20.975972 containerd[1575]: time="2026-01-23T20:37:20.975947963Z" level=info msg="CreateContainer within sandbox \"b1bd610ae3ee282db0cec75ec01e3e1e0e38ee8431a54a86b623513c3e3c6239\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 20:37:20.989205 containerd[1575]: time="2026-01-23T20:37:20.989180692Z" level=info msg="Container b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:20.992339 containerd[1575]: time="2026-01-23T20:37:20.992136720Z" level=info msg="Container d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:20.993597 containerd[1575]: time="2026-01-23T20:37:20.993573781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-zm8g6.gb1.brightbox.com,Uid:6a18d9620a5ebb182ce23cd2aef20fb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea79f310a1a46d8a7df36a1a806b34c412fba2fca1b454ad237c00350b0825b5\"" Jan 23 20:37:20.996548 containerd[1575]: time="2026-01-23T20:37:20.996524627Z" level=info msg="CreateContainer within sandbox \"ea79f310a1a46d8a7df36a1a806b34c412fba2fca1b454ad237c00350b0825b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 20:37:21.002031 containerd[1575]: time="2026-01-23T20:37:21.001427274Z" level=info msg="Container dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:21.009658 containerd[1575]: time="2026-01-23T20:37:21.009634726Z" level=info msg="CreateContainer within sandbox \"b1bd610ae3ee282db0cec75ec01e3e1e0e38ee8431a54a86b623513c3e3c6239\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f\"" Jan 23 20:37:21.010588 containerd[1575]: time="2026-01-23T20:37:21.010562082Z" 
level=info msg="CreateContainer within sandbox \"c556ec72de8c57aa01e4977dcef2e8bc5e60a0bbef422e8dce9625d0807df715\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04\"" Jan 23 20:37:21.010854 containerd[1575]: time="2026-01-23T20:37:21.010805340Z" level=info msg="CreateContainer within sandbox \"ea79f310a1a46d8a7df36a1a806b34c412fba2fca1b454ad237c00350b0825b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862\"" Jan 23 20:37:21.011532 containerd[1575]: time="2026-01-23T20:37:21.011511668Z" level=info msg="StartContainer for \"b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04\"" Jan 23 20:37:21.011928 containerd[1575]: time="2026-01-23T20:37:21.011881708Z" level=info msg="StartContainer for \"d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f\"" Jan 23 20:37:21.014760 containerd[1575]: time="2026-01-23T20:37:21.014734602Z" level=info msg="connecting to shim d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f" address="unix:///run/containerd/s/2c7c1a8f408856464187735ecf19de65063e3e4d9e6ad3b2788d96175c088579" protocol=ttrpc version=3 Jan 23 20:37:21.016278 containerd[1575]: time="2026-01-23T20:37:21.015238110Z" level=info msg="StartContainer for \"dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862\"" Jan 23 20:37:21.016366 containerd[1575]: time="2026-01-23T20:37:21.015816456Z" level=info msg="connecting to shim b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04" address="unix:///run/containerd/s/33cd6e51e16018ebdfcb9e0c87ce4af8562a75b94a5e5eb4b9b0314988deffb4" protocol=ttrpc version=3 Jan 23 20:37:21.017093 containerd[1575]: time="2026-01-23T20:37:21.017068133Z" level=info msg="connecting to shim dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862" 
address="unix:///run/containerd/s/d26ba0af33745753cada0e5985dfc1a0ccfc16bbb736a8b36a65dbe72b0f93c6" protocol=ttrpc version=3 Jan 23 20:37:21.022376 kubelet[2464]: E0123 20:37:21.022334 2464 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.93.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-zm8g6.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 20:37:21.045932 systemd[1]: Started cri-containerd-dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862.scope - libcontainer container dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862. Jan 23 20:37:21.058432 systemd[1]: Started cri-containerd-d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f.scope - libcontainer container d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f. Jan 23 20:37:21.063228 systemd[1]: Started cri-containerd-b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04.scope - libcontainer container b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04. 
Jan 23 20:37:21.146206 containerd[1575]: time="2026-01-23T20:37:21.146109023Z" level=info msg="StartContainer for \"d4b863be65b886e447a062e58e499333d0946b199eb5eecb3160a4f9634d770f\" returns successfully" Jan 23 20:37:21.154972 containerd[1575]: time="2026-01-23T20:37:21.154823308Z" level=info msg="StartContainer for \"b5c9dd2b2b5cd0a5d2f39585153b6a4a0e0c50a7e6a40c5c484eb63cd0d64c04\" returns successfully" Jan 23 20:37:21.181240 containerd[1575]: time="2026-01-23T20:37:21.181143970Z" level=info msg="StartContainer for \"dd7778c452c5802b92fa2b4638fce9f681f50d98d7c180e095f5613549da7862\" returns successfully" Jan 23 20:37:21.229836 kubelet[2464]: E0123 20:37:21.229539 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:21.230947 kubelet[2464]: E0123 20:37:21.230923 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:21.235324 kubelet[2464]: E0123 20:37:21.233831 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:21.247339 kubelet[2464]: E0123 20:37:21.247312 2464 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.93.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.93.250:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 20:37:21.776102 kubelet[2464]: I0123 20:37:21.776075 2464 kubelet_node_status.go:75] "Attempting to register node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:22.235175 kubelet[2464]: E0123 20:37:22.235138 2464 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:22.235653 kubelet[2464]: E0123 20:37:22.235597 2464 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.814044 kubelet[2464]: E0123 20:37:23.813997 2464 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-zm8g6.gb1.brightbox.com\" not found" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.887388 kubelet[2464]: E0123 20:37:23.886903 2464 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-zm8g6.gb1.brightbox.com.188d769bccab0a8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-zm8g6.gb1.brightbox.com,UID:srv-zm8g6.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-zm8g6.gb1.brightbox.com,},FirstTimestamp:2026-01-23 20:37:20.15224283 +0000 UTC m=+0.718659590,LastTimestamp:2026-01-23 20:37:20.15224283 +0000 UTC m=+0.718659590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-zm8g6.gb1.brightbox.com,}" Jan 23 20:37:23.946636 kubelet[2464]: I0123 20:37:23.946333 2464 kubelet_node_status.go:78] "Successfully registered node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.969473 kubelet[2464]: I0123 20:37:23.969371 2464 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.986461 kubelet[2464]: E0123 20:37:23.986129 2464 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-srv-zm8g6.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.986461 kubelet[2464]: I0123 20:37:23.986169 2464 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.988360 kubelet[2464]: E0123 20:37:23.988111 2464 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.988360 kubelet[2464]: I0123 20:37:23.988137 2464 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:23.990186 kubelet[2464]: E0123 20:37:23.990160 2464 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:24.144920 kubelet[2464]: I0123 20:37:24.144818 2464 apiserver.go:52] "Watching apiserver" Jan 23 20:37:24.170042 kubelet[2464]: I0123 20:37:24.169942 2464 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 20:37:25.187778 kubelet[2464]: I0123 20:37:25.187735 2464 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:25.193401 kubelet[2464]: I0123 20:37:25.193033 2464 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 20:37:25.913859 systemd[1]: Reload requested from client PID 2763 ('systemctl') (unit session-11.scope)... 
Jan 23 20:37:25.913882 systemd[1]: Reloading... Jan 23 20:37:26.042293 zram_generator::config[2811]: No configuration found. Jan 23 20:37:26.333822 systemd[1]: Reloading finished in 419 ms. Jan 23 20:37:26.371076 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:37:26.384944 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 20:37:26.385589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:37:26.385712 systemd[1]: kubelet.service: Consumed 1.222s CPU time, 127.9M memory peak. Jan 23 20:37:26.391872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 20:37:26.596183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 20:37:26.606716 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 20:37:26.690337 kubelet[2871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 20:37:26.690337 kubelet[2871]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 20:37:26.690337 kubelet[2871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 20:37:26.690337 kubelet[2871]: I0123 20:37:26.689617 2871 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 20:37:26.702108 kubelet[2871]: I0123 20:37:26.702051 2871 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 20:37:26.702108 kubelet[2871]: I0123 20:37:26.702093 2871 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 20:37:26.702477 kubelet[2871]: I0123 20:37:26.702457 2871 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 20:37:26.704035 kubelet[2871]: I0123 20:37:26.704005 2871 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 20:37:26.709145 kubelet[2871]: I0123 20:37:26.709058 2871 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 20:37:26.719029 kubelet[2871]: I0123 20:37:26.718991 2871 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 20:37:26.732072 kubelet[2871]: I0123 20:37:26.732031 2871 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 20:37:26.734287 kubelet[2871]: I0123 20:37:26.733773 2871 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 20:37:26.734287 kubelet[2871]: I0123 20:37:26.733812 2871 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-zm8g6.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 20:37:26.734287 kubelet[2871]: I0123 20:37:26.734120 2871 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
20:37:26.734287 kubelet[2871]: I0123 20:37:26.734134 2871 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 20:37:26.734618 kubelet[2871]: I0123 20:37:26.734603 2871 state_mem.go:36] "Initialized new in-memory state store" Jan 23 20:37:26.734884 kubelet[2871]: I0123 20:37:26.734871 2871 kubelet.go:480] "Attempting to sync node with API server" Jan 23 20:37:26.734978 kubelet[2871]: I0123 20:37:26.734968 2871 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 20:37:26.735077 kubelet[2871]: I0123 20:37:26.735068 2871 kubelet.go:386] "Adding apiserver pod source" Jan 23 20:37:26.735142 kubelet[2871]: I0123 20:37:26.735135 2871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 20:37:26.745288 kubelet[2871]: I0123 20:37:26.744754 2871 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 20:37:26.747282 kubelet[2871]: I0123 20:37:26.746099 2871 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 20:37:26.755799 kubelet[2871]: I0123 20:37:26.755728 2871 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 20:37:26.755956 kubelet[2871]: I0123 20:37:26.755936 2871 server.go:1289] "Started kubelet" Jan 23 20:37:26.758844 kubelet[2871]: I0123 20:37:26.758798 2871 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 20:37:26.763305 kubelet[2871]: I0123 20:37:26.762919 2871 server.go:317] "Adding debug handlers to kubelet server" Jan 23 20:37:26.765042 kubelet[2871]: I0123 20:37:26.765022 2871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 20:37:26.768557 kubelet[2871]: E0123 20:37:26.768537 2871 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 20:37:26.769040 kubelet[2871]: I0123 20:37:26.759084 2871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 20:37:26.770528 kubelet[2871]: I0123 20:37:26.770512 2871 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 20:37:26.771008 kubelet[2871]: I0123 20:37:26.770989 2871 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 20:37:26.774964 kubelet[2871]: I0123 20:37:26.774894 2871 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 20:37:26.775180 kubelet[2871]: I0123 20:37:26.775168 2871 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 20:37:26.775380 kubelet[2871]: I0123 20:37:26.775370 2871 reconciler.go:26] "Reconciler: start to sync state" Jan 23 20:37:26.776264 kubelet[2871]: I0123 20:37:26.776247 2871 factory.go:223] Registration of the systemd container factory successfully Jan 23 20:37:26.776483 kubelet[2871]: I0123 20:37:26.776464 2871 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 20:37:26.780115 kubelet[2871]: I0123 20:37:26.780080 2871 factory.go:223] Registration of the containerd container factory successfully Jan 23 20:37:26.797769 kubelet[2871]: I0123 20:37:26.797733 2871 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 20:37:26.809614 kubelet[2871]: I0123 20:37:26.808599 2871 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 20:37:26.809614 kubelet[2871]: I0123 20:37:26.808652 2871 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 20:37:26.809614 kubelet[2871]: I0123 20:37:26.808681 2871 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 20:37:26.809614 kubelet[2871]: I0123 20:37:26.808691 2871 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 20:37:26.809614 kubelet[2871]: E0123 20:37:26.808831 2871 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 20:37:26.859381 kubelet[2871]: I0123 20:37:26.859183 2871 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 20:37:26.859381 kubelet[2871]: I0123 20:37:26.859206 2871 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 20:37:26.859381 kubelet[2871]: I0123 20:37:26.859229 2871 state_mem.go:36] "Initialized new in-memory state store" Jan 23 20:37:26.860020 kubelet[2871]: I0123 20:37:26.859988 2871 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 20:37:26.860020 kubelet[2871]: I0123 20:37:26.860009 2871 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 20:37:26.860082 kubelet[2871]: I0123 20:37:26.860034 2871 policy_none.go:49] "None policy: Start" Jan 23 20:37:26.860082 kubelet[2871]: I0123 20:37:26.860051 2871 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 20:37:26.860082 kubelet[2871]: I0123 20:37:26.860063 2871 state_mem.go:35] "Initializing new in-memory state store" Jan 23 20:37:26.860180 kubelet[2871]: I0123 20:37:26.860166 2871 state_mem.go:75] "Updated machine memory state" Jan 23 20:37:26.867037 kubelet[2871]: E0123 20:37:26.866820 2871 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 20:37:26.867149 kubelet[2871]: I0123 
20:37:26.867056 2871 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 20:37:26.867149 kubelet[2871]: I0123 20:37:26.867072 2871 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 20:37:26.868346 kubelet[2871]: I0123 20:37:26.868325 2871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 20:37:26.870711 kubelet[2871]: E0123 20:37:26.870688 2871 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 20:37:26.912203 kubelet[2871]: I0123 20:37:26.910836 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:26.913315 kubelet[2871]: I0123 20:37:26.912436 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:26.913535 kubelet[2871]: I0123 20:37:26.912828 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:26.919439 kubelet[2871]: I0123 20:37:26.919406 2871 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 20:37:26.922289 kubelet[2871]: I0123 20:37:26.921328 2871 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 20:37:26.923436 kubelet[2871]: I0123 20:37:26.923413 2871 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 20:37:26.923526 kubelet[2871]: E0123 20:37:26.923463 2871 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-srv-zm8g6.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:26.926691 sudo[2912]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 20:37:26.927477 sudo[2912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 20:37:26.978747 kubelet[2871]: I0123 20:37:26.978679 2871 kubelet_node_status.go:75] "Attempting to register node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:26.993956 kubelet[2871]: I0123 20:37:26.993893 2871 kubelet_node_status.go:124] "Node was previously registered" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:26.994164 kubelet[2871]: I0123 20:37:26.994038 2871 kubelet_node_status.go:78] "Successfully registered node" node="srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.077636 kubelet[2871]: I0123 20:37:27.077558 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-ca-certs\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.077636 kubelet[2871]: I0123 20:37:27.077611 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-k8s-certs\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.077636 kubelet[2871]: I0123 20:37:27.077642 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fea85c7de01f4e6076cae7d607872dee-ca-certs\") pod 
\"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" (UID: \"fea85c7de01f4e6076cae7d607872dee\") " pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.077636 kubelet[2871]: I0123 20:37:27.077666 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fea85c7de01f4e6076cae7d607872dee-usr-share-ca-certificates\") pod \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" (UID: \"fea85c7de01f4e6076cae7d607872dee\") " pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.078190 kubelet[2871]: I0123 20:37:27.077688 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-flexvolume-dir\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.078190 kubelet[2871]: I0123 20:37:27.077708 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-kubeconfig\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.078190 kubelet[2871]: I0123 20:37:27.077730 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/feb77ccb1142fa8b0c6a699106052bd7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-zm8g6.gb1.brightbox.com\" (UID: \"feb77ccb1142fa8b0c6a699106052bd7\") " pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.078190 kubelet[2871]: I0123 
20:37:27.077767 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a18d9620a5ebb182ce23cd2aef20fb1-kubeconfig\") pod \"kube-scheduler-srv-zm8g6.gb1.brightbox.com\" (UID: \"6a18d9620a5ebb182ce23cd2aef20fb1\") " pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.078190 kubelet[2871]: I0123 20:37:27.077786 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fea85c7de01f4e6076cae7d607872dee-k8s-certs\") pod \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" (UID: \"fea85c7de01f4e6076cae7d607872dee\") " pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.394666 sudo[2912]: pam_unix(sudo:session): session closed for user root Jan 23 20:37:27.749797 kubelet[2871]: I0123 20:37:27.748492 2871 apiserver.go:52] "Watching apiserver" Jan 23 20:37:27.776174 kubelet[2871]: I0123 20:37:27.776015 2871 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 20:37:27.843625 kubelet[2871]: I0123 20:37:27.843458 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.844624 kubelet[2871]: I0123 20:37:27.844463 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.856767 kubelet[2871]: I0123 20:37:27.856036 2871 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 20:37:27.856767 kubelet[2871]: E0123 20:37:27.856090 2871 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-zm8g6.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" Jan 23 
20:37:27.857036 kubelet[2871]: I0123 20:37:27.857020 2871 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 20:37:27.857704 kubelet[2871]: E0123 20:37:27.857594 2871 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-zm8g6.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" Jan 23 20:37:27.891934 kubelet[2871]: I0123 20:37:27.891631 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-zm8g6.gb1.brightbox.com" podStartSLOduration=1.891604901 podStartE2EDuration="1.891604901s" podCreationTimestamp="2026-01-23 20:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:37:27.879243311 +0000 UTC m=+1.265630268" watchObservedRunningTime="2026-01-23 20:37:27.891604901 +0000 UTC m=+1.277991834" Jan 23 20:37:27.904456 kubelet[2871]: I0123 20:37:27.903825 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-zm8g6.gb1.brightbox.com" podStartSLOduration=1.903782866 podStartE2EDuration="1.903782866s" podCreationTimestamp="2026-01-23 20:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:37:27.892494759 +0000 UTC m=+1.278881716" watchObservedRunningTime="2026-01-23 20:37:27.903782866 +0000 UTC m=+1.290169917" Jan 23 20:37:27.922924 kubelet[2871]: I0123 20:37:27.922671 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-zm8g6.gb1.brightbox.com" podStartSLOduration=2.922646779 podStartE2EDuration="2.922646779s" podCreationTimestamp="2026-01-23 20:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:37:27.906461278 +0000 UTC m=+1.292848263" watchObservedRunningTime="2026-01-23 20:37:27.922646779 +0000 UTC m=+1.309033715" Jan 23 20:37:29.006897 sudo[1873]: pam_unix(sudo:session): session closed for user root Jan 23 20:37:29.097781 sshd[1872]: Connection closed by 68.220.241.50 port 33922 Jan 23 20:37:29.099031 sshd-session[1869]: pam_unix(sshd:session): session closed for user core Jan 23 20:37:29.108222 systemd[1]: sshd@8-10.244.93.250:22-68.220.241.50:33922.service: Deactivated successfully. Jan 23 20:37:29.109052 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Jan 23 20:37:29.112309 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 20:37:29.112597 systemd[1]: session-11.scope: Consumed 6.497s CPU time, 211M memory peak. Jan 23 20:37:29.117736 systemd-logind[1547]: Removed session 11. Jan 23 20:37:32.297771 kubelet[2871]: I0123 20:37:32.297729 2871 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 20:37:32.298400 containerd[1575]: time="2026-01-23T20:37:32.298069227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 20:37:32.299398 kubelet[2871]: I0123 20:37:32.299377 2871 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 20:37:33.387306 systemd[1]: Created slice kubepods-besteffort-podf010cc55_0ce0_4750_98ec_2109731d0103.slice - libcontainer container kubepods-besteffort-podf010cc55_0ce0_4750_98ec_2109731d0103.slice. 
Jan 23 20:37:33.401798 kubelet[2871]: E0123 20:37:33.401764 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-zm8g6.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-zm8g6.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Jan 23 20:37:33.402742 kubelet[2871]: E0123 20:37:33.402460 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-zm8g6.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-zm8g6.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Jan 23 20:37:33.402742 kubelet[2871]: E0123 20:37:33.402536 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-zm8g6.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-zm8g6.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Jan 23 20:37:33.408171 systemd[1]: Created slice kubepods-burstable-pod72259001_6d43_408b_9c34_d7aa5bf12ed4.slice - libcontainer container kubepods-burstable-pod72259001_6d43_408b_9c34_d7aa5bf12ed4.slice. 
Jan 23 20:37:33.422746 kubelet[2871]: I0123 20:37:33.422687 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72259001-6d43-408b-9c34-d7aa5bf12ed4-clustermesh-secrets\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.422746 kubelet[2871]: I0123 20:37:33.422734 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-hubble-tls\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.422746 kubelet[2871]: I0123 20:37:33.422752 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cni-path\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423038 kubelet[2871]: I0123 20:37:33.422767 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-lib-modules\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423038 kubelet[2871]: I0123 20:37:33.422793 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r59pl\" (UniqueName: \"kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-kube-api-access-r59pl\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423038 kubelet[2871]: I0123 20:37:33.422820 2871 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f010cc55-0ce0-4750-98ec-2109731d0103-lib-modules\") pod \"kube-proxy-m4hps\" (UID: \"f010cc55-0ce0-4750-98ec-2109731d0103\") " pod="kube-system/kube-proxy-m4hps" Jan 23 20:37:33.423038 kubelet[2871]: I0123 20:37:33.422848 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-bpf-maps\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423038 kubelet[2871]: I0123 20:37:33.422874 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-net\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423038 kubelet[2871]: I0123 20:37:33.422890 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f010cc55-0ce0-4750-98ec-2109731d0103-kube-proxy\") pod \"kube-proxy-m4hps\" (UID: \"f010cc55-0ce0-4750-98ec-2109731d0103\") " pod="kube-system/kube-proxy-m4hps" Jan 23 20:37:33.423302 kubelet[2871]: I0123 20:37:33.422904 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f010cc55-0ce0-4750-98ec-2109731d0103-xtables-lock\") pod \"kube-proxy-m4hps\" (UID: \"f010cc55-0ce0-4750-98ec-2109731d0103\") " pod="kube-system/kube-proxy-m4hps" Jan 23 20:37:33.423302 kubelet[2871]: I0123 20:37:33.422918 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-cgroup\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423302 kubelet[2871]: I0123 20:37:33.423280 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-etc-cni-netd\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423415 kubelet[2871]: I0123 20:37:33.423303 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-config-path\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423415 kubelet[2871]: I0123 20:37:33.423320 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-kernel\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423415 kubelet[2871]: I0123 20:37:33.423356 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4j7\" (UniqueName: \"kubernetes.io/projected/f010cc55-0ce0-4750-98ec-2109731d0103-kube-api-access-cv4j7\") pod \"kube-proxy-m4hps\" (UID: \"f010cc55-0ce0-4750-98ec-2109731d0103\") " pod="kube-system/kube-proxy-m4hps" Jan 23 20:37:33.423415 kubelet[2871]: I0123 20:37:33.423374 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-run\") pod 
\"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423415 kubelet[2871]: I0123 20:37:33.423388 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-hostproc\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.423603 kubelet[2871]: I0123 20:37:33.423409 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-xtables-lock\") pod \"cilium-7j87l\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " pod="kube-system/cilium-7j87l" Jan 23 20:37:33.483377 systemd[1]: Created slice kubepods-besteffort-poddba5cb81_ba86_479e_ad16_7e3dd3f5592c.slice - libcontainer container kubepods-besteffort-poddba5cb81_ba86_479e_ad16_7e3dd3f5592c.slice. 
Jan 23 20:37:33.524307 kubelet[2871]: I0123 20:37:33.523974 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w46hz\" (UID: \"dba5cb81-ba86-479e-ad16-7e3dd3f5592c\") " pod="kube-system/cilium-operator-6c4d7847fc-w46hz" Jan 23 20:37:33.524307 kubelet[2871]: I0123 20:37:33.524048 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4wl\" (UniqueName: \"kubernetes.io/projected/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-kube-api-access-8p4wl\") pod \"cilium-operator-6c4d7847fc-w46hz\" (UID: \"dba5cb81-ba86-479e-ad16-7e3dd3f5592c\") " pod="kube-system/cilium-operator-6c4d7847fc-w46hz" Jan 23 20:37:33.702159 containerd[1575]: time="2026-01-23T20:37:33.701947228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4hps,Uid:f010cc55-0ce0-4750-98ec-2109731d0103,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:33.726504 containerd[1575]: time="2026-01-23T20:37:33.726426410Z" level=info msg="connecting to shim 9d324a87bab5ac65a00e35242db38db28844ad9452a4d27b3ada47d4e9668d2c" address="unix:///run/containerd/s/8a05749b8746d177e7adcff36c5f39b2fd695a7a1aebebe958e572409b6c7d61" namespace=k8s.io protocol=ttrpc version=3 Jan 23 20:37:33.769467 systemd[1]: Started cri-containerd-9d324a87bab5ac65a00e35242db38db28844ad9452a4d27b3ada47d4e9668d2c.scope - libcontainer container 9d324a87bab5ac65a00e35242db38db28844ad9452a4d27b3ada47d4e9668d2c. 
Jan 23 20:37:33.803553 containerd[1575]: time="2026-01-23T20:37:33.803502573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4hps,Uid:f010cc55-0ce0-4750-98ec-2109731d0103,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d324a87bab5ac65a00e35242db38db28844ad9452a4d27b3ada47d4e9668d2c\"" Jan 23 20:37:33.810231 containerd[1575]: time="2026-01-23T20:37:33.810191131Z" level=info msg="CreateContainer within sandbox \"9d324a87bab5ac65a00e35242db38db28844ad9452a4d27b3ada47d4e9668d2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 20:37:33.818725 containerd[1575]: time="2026-01-23T20:37:33.818686755Z" level=info msg="Container 072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:33.824453 containerd[1575]: time="2026-01-23T20:37:33.824389086Z" level=info msg="CreateContainer within sandbox \"9d324a87bab5ac65a00e35242db38db28844ad9452a4d27b3ada47d4e9668d2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e\"" Jan 23 20:37:33.825707 containerd[1575]: time="2026-01-23T20:37:33.825675555Z" level=info msg="StartContainer for \"072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e\"" Jan 23 20:37:33.828089 containerd[1575]: time="2026-01-23T20:37:33.828051240Z" level=info msg="connecting to shim 072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e" address="unix:///run/containerd/s/8a05749b8746d177e7adcff36c5f39b2fd695a7a1aebebe958e572409b6c7d61" protocol=ttrpc version=3 Jan 23 20:37:33.858789 systemd[1]: Started cri-containerd-072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e.scope - libcontainer container 072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e. 
Jan 23 20:37:33.946428 containerd[1575]: time="2026-01-23T20:37:33.946372783Z" level=info msg="StartContainer for \"072868de4c999039257c14d08fc0831188791b16be273ce385b579852c92ea6e\" returns successfully" Jan 23 20:37:34.535010 kubelet[2871]: E0123 20:37:34.534842 2871 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 20:37:34.536660 kubelet[2871]: E0123 20:37:34.536591 2871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-config-path podName:72259001-6d43-408b-9c34-d7aa5bf12ed4 nodeName:}" failed. No retries permitted until 2026-01-23 20:37:35.035223854 +0000 UTC m=+8.421610790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-config-path") pod "cilium-7j87l" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4") : failed to sync configmap cache: timed out waiting for the condition Jan 23 20:37:34.626029 kubelet[2871]: E0123 20:37:34.625746 2871 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 20:37:34.626029 kubelet[2871]: E0123 20:37:34.625834 2871 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-cilium-config-path podName:dba5cb81-ba86-479e-ad16-7e3dd3f5592c nodeName:}" failed. No retries permitted until 2026-01-23 20:37:35.125811314 +0000 UTC m=+8.512198246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-cilium-config-path") pod "cilium-operator-6c4d7847fc-w46hz" (UID: "dba5cb81-ba86-479e-ad16-7e3dd3f5592c") : failed to sync configmap cache: timed out waiting for the condition Jan 23 20:37:34.888614 kubelet[2871]: I0123 20:37:34.888458 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4hps" podStartSLOduration=1.8884356979999999 podStartE2EDuration="1.888435698s" podCreationTimestamp="2026-01-23 20:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:37:34.88770486 +0000 UTC m=+8.274091823" watchObservedRunningTime="2026-01-23 20:37:34.888435698 +0000 UTC m=+8.274822655" Jan 23 20:37:35.216147 containerd[1575]: time="2026-01-23T20:37:35.214464590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j87l,Uid:72259001-6d43-408b-9c34-d7aa5bf12ed4,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:35.249697 containerd[1575]: time="2026-01-23T20:37:35.249606004Z" level=info msg="connecting to shim 34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297" address="unix:///run/containerd/s/38d97f6ec1632259ad176e2acd29723d27f22143d7ef0eaaeb3a46685ad0d587" namespace=k8s.io protocol=ttrpc version=3 Jan 23 20:37:35.283423 systemd[1]: Started cri-containerd-34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297.scope - libcontainer container 34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297. 
Jan 23 20:37:35.288100 containerd[1575]: time="2026-01-23T20:37:35.287753188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w46hz,Uid:dba5cb81-ba86-479e-ad16-7e3dd3f5592c,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:35.314803 containerd[1575]: time="2026-01-23T20:37:35.314399096Z" level=info msg="connecting to shim bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a" address="unix:///run/containerd/s/409d0d73747e891501ba06a7411046ae238b7c073aa5059fb14c5ff901687443" namespace=k8s.io protocol=ttrpc version=3 Jan 23 20:37:35.337443 containerd[1575]: time="2026-01-23T20:37:35.337409133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j87l,Uid:72259001-6d43-408b-9c34-d7aa5bf12ed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\"" Jan 23 20:37:35.341841 containerd[1575]: time="2026-01-23T20:37:35.341810663Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 20:37:35.355428 systemd[1]: Started cri-containerd-bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a.scope - libcontainer container bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a. Jan 23 20:37:35.417072 containerd[1575]: time="2026-01-23T20:37:35.416961836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w46hz,Uid:dba5cb81-ba86-479e-ad16-7e3dd3f5592c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\"" Jan 23 20:37:44.053425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442664612.mount: Deactivated successfully. 
Jan 23 20:37:46.087757 containerd[1575]: time="2026-01-23T20:37:46.087688991Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:37:46.090103 containerd[1575]: time="2026-01-23T20:37:46.090056122Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 20:37:46.090513 containerd[1575]: time="2026-01-23T20:37:46.090450400Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:37:46.092942 containerd[1575]: time="2026-01-23T20:37:46.092904389Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.751058489s" Jan 23 20:37:46.093454 containerd[1575]: time="2026-01-23T20:37:46.093078006Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 20:37:46.097508 containerd[1575]: time="2026-01-23T20:37:46.097382953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 20:37:46.100324 containerd[1575]: time="2026-01-23T20:37:46.099361250Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 20:37:46.117110 containerd[1575]: time="2026-01-23T20:37:46.115861649Z" level=info msg="Container 385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:46.119848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313623403.mount: Deactivated successfully. Jan 23 20:37:46.123797 containerd[1575]: time="2026-01-23T20:37:46.123736656Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\"" Jan 23 20:37:46.125437 containerd[1575]: time="2026-01-23T20:37:46.124550817Z" level=info msg="StartContainer for \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\"" Jan 23 20:37:46.125866 containerd[1575]: time="2026-01-23T20:37:46.125833220Z" level=info msg="connecting to shim 385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744" address="unix:///run/containerd/s/38d97f6ec1632259ad176e2acd29723d27f22143d7ef0eaaeb3a46685ad0d587" protocol=ttrpc version=3 Jan 23 20:37:46.213696 systemd[1]: Started cri-containerd-385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744.scope - libcontainer container 385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744. Jan 23 20:37:46.274222 containerd[1575]: time="2026-01-23T20:37:46.274026642Z" level=info msg="StartContainer for \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\" returns successfully" Jan 23 20:37:46.298723 systemd[1]: cri-containerd-385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744.scope: Deactivated successfully. 
Jan 23 20:37:46.337427 containerd[1575]: time="2026-01-23T20:37:46.337205081Z" level=info msg="received container exit event container_id:\"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\" id:\"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\" pid:3294 exited_at:{seconds:1769200666 nanos:301709993}" Jan 23 20:37:46.372185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744-rootfs.mount: Deactivated successfully. Jan 23 20:37:46.914229 containerd[1575]: time="2026-01-23T20:37:46.914128874Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 20:37:46.939532 containerd[1575]: time="2026-01-23T20:37:46.939036935Z" level=info msg="Container 3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:46.959443 containerd[1575]: time="2026-01-23T20:37:46.959385376Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\"" Jan 23 20:37:46.962822 containerd[1575]: time="2026-01-23T20:37:46.962785580Z" level=info msg="StartContainer for \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\"" Jan 23 20:37:46.967638 containerd[1575]: time="2026-01-23T20:37:46.967397638Z" level=info msg="connecting to shim 3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8" address="unix:///run/containerd/s/38d97f6ec1632259ad176e2acd29723d27f22143d7ef0eaaeb3a46685ad0d587" protocol=ttrpc version=3 Jan 23 20:37:46.994548 systemd[1]: Started cri-containerd-3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8.scope - libcontainer 
container 3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8. Jan 23 20:37:47.033714 containerd[1575]: time="2026-01-23T20:37:47.033653787Z" level=info msg="StartContainer for \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\" returns successfully" Jan 23 20:37:47.048508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 20:37:47.048713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 20:37:47.048894 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 20:37:47.050850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 20:37:47.060707 systemd[1]: cri-containerd-3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8.scope: Deactivated successfully. Jan 23 20:37:47.064385 containerd[1575]: time="2026-01-23T20:37:47.064250386Z" level=info msg="received container exit event container_id:\"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\" id:\"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\" pid:3342 exited_at:{seconds:1769200667 nanos:61852137}" Jan 23 20:37:47.079016 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 20:37:47.653697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263344899.mount: Deactivated successfully. Jan 23 20:37:47.922245 containerd[1575]: time="2026-01-23T20:37:47.922126798Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 20:37:47.974309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736031711.mount: Deactivated successfully. 
Jan 23 20:37:47.986135 containerd[1575]: time="2026-01-23T20:37:47.986067507Z" level=info msg="Container 265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:48.040792 containerd[1575]: time="2026-01-23T20:37:48.040727001Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\"" Jan 23 20:37:48.044895 containerd[1575]: time="2026-01-23T20:37:48.043448730Z" level=info msg="StartContainer for \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\"" Jan 23 20:37:48.050420 containerd[1575]: time="2026-01-23T20:37:48.050389386Z" level=info msg="connecting to shim 265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d" address="unix:///run/containerd/s/38d97f6ec1632259ad176e2acd29723d27f22143d7ef0eaaeb3a46685ad0d587" protocol=ttrpc version=3 Jan 23 20:37:48.086106 systemd[1]: Started cri-containerd-265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d.scope - libcontainer container 265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d. Jan 23 20:37:48.185487 containerd[1575]: time="2026-01-23T20:37:48.184535057Z" level=info msg="StartContainer for \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\" returns successfully" Jan 23 20:37:48.194391 systemd[1]: cri-containerd-265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d.scope: Deactivated successfully. Jan 23 20:37:48.194738 systemd[1]: cri-containerd-265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d.scope: Consumed 37ms CPU time, 5.1M memory peak, 1.4M read from disk. 
Jan 23 20:37:48.198629 containerd[1575]: time="2026-01-23T20:37:48.198590457Z" level=info msg="received container exit event container_id:\"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\" id:\"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\" pid:3399 exited_at:{seconds:1769200668 nanos:198288554}" Jan 23 20:37:48.228271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d-rootfs.mount: Deactivated successfully. Jan 23 20:37:48.878100 containerd[1575]: time="2026-01-23T20:37:48.878043838Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:37:48.878851 containerd[1575]: time="2026-01-23T20:37:48.878808991Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 20:37:48.879216 containerd[1575]: time="2026-01-23T20:37:48.879192655Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 20:37:48.881556 containerd[1575]: time="2026-01-23T20:37:48.881322738Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.783636302s" Jan 23 20:37:48.881556 containerd[1575]: time="2026-01-23T20:37:48.881390424Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 20:37:48.886821 containerd[1575]: time="2026-01-23T20:37:48.886783512Z" level=info msg="CreateContainer within sandbox \"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 20:37:48.897297 containerd[1575]: time="2026-01-23T20:37:48.895762507Z" level=info msg="Container 63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:48.911831 containerd[1575]: time="2026-01-23T20:37:48.911690676Z" level=info msg="CreateContainer within sandbox \"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\"" Jan 23 20:37:48.912884 containerd[1575]: time="2026-01-23T20:37:48.912783965Z" level=info msg="StartContainer for \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\"" Jan 23 20:37:48.914792 containerd[1575]: time="2026-01-23T20:37:48.914754144Z" level=info msg="connecting to shim 63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85" address="unix:///run/containerd/s/409d0d73747e891501ba06a7411046ae238b7c073aa5059fb14c5ff901687443" protocol=ttrpc version=3 Jan 23 20:37:48.938664 containerd[1575]: time="2026-01-23T20:37:48.938077475Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 20:37:48.958570 systemd[1]: Started cri-containerd-63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85.scope - libcontainer container 
63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85. Jan 23 20:37:48.973632 containerd[1575]: time="2026-01-23T20:37:48.973524780Z" level=info msg="Container f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:48.981201 containerd[1575]: time="2026-01-23T20:37:48.981121660Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\"" Jan 23 20:37:48.982278 containerd[1575]: time="2026-01-23T20:37:48.982239559Z" level=info msg="StartContainer for \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\"" Jan 23 20:37:48.983635 containerd[1575]: time="2026-01-23T20:37:48.983601266Z" level=info msg="connecting to shim f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe" address="unix:///run/containerd/s/38d97f6ec1632259ad176e2acd29723d27f22143d7ef0eaaeb3a46685ad0d587" protocol=ttrpc version=3 Jan 23 20:37:49.013589 systemd[1]: Started cri-containerd-f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe.scope - libcontainer container f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe. Jan 23 20:37:49.026442 containerd[1575]: time="2026-01-23T20:37:49.026025749Z" level=info msg="StartContainer for \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" returns successfully" Jan 23 20:37:49.063850 systemd[1]: cri-containerd-f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe.scope: Deactivated successfully. 
Jan 23 20:37:49.067280 containerd[1575]: time="2026-01-23T20:37:49.067044467Z" level=info msg="received container exit event container_id:\"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\" id:\"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\" pid:3474 exited_at:{seconds:1769200669 nanos:65910041}" Jan 23 20:37:49.069277 containerd[1575]: time="2026-01-23T20:37:49.069237405Z" level=info msg="StartContainer for \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\" returns successfully" Jan 23 20:37:49.961117 containerd[1575]: time="2026-01-23T20:37:49.960936143Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 20:37:49.982280 containerd[1575]: time="2026-01-23T20:37:49.980968226Z" level=info msg="Container a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2: CDI devices from CRI Config.CDIDevices: []" Jan 23 20:37:49.987167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983513109.mount: Deactivated successfully. 
Jan 23 20:37:49.992900 containerd[1575]: time="2026-01-23T20:37:49.992850506Z" level=info msg="CreateContainer within sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\"" Jan 23 20:37:49.993549 containerd[1575]: time="2026-01-23T20:37:49.993530364Z" level=info msg="StartContainer for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\"" Jan 23 20:37:49.996292 containerd[1575]: time="2026-01-23T20:37:49.994648669Z" level=info msg="connecting to shim a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2" address="unix:///run/containerd/s/38d97f6ec1632259ad176e2acd29723d27f22143d7ef0eaaeb3a46685ad0d587" protocol=ttrpc version=3 Jan 23 20:37:50.035966 systemd[1]: Started cri-containerd-a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2.scope - libcontainer container a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2. 
Jan 23 20:37:50.140478 kubelet[2871]: I0123 20:37:50.139471 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w46hz" podStartSLOduration=3.67485018 podStartE2EDuration="17.139433628s" podCreationTimestamp="2026-01-23 20:37:33 +0000 UTC" firstStartedPulling="2026-01-23 20:37:35.418385382 +0000 UTC m=+8.804772316" lastFinishedPulling="2026-01-23 20:37:48.88296883 +0000 UTC m=+22.269355764" observedRunningTime="2026-01-23 20:37:50.046614068 +0000 UTC m=+23.433001025" watchObservedRunningTime="2026-01-23 20:37:50.139433628 +0000 UTC m=+23.525820586" Jan 23 20:37:50.150854 containerd[1575]: time="2026-01-23T20:37:50.150812605Z" level=info msg="StartContainer for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" returns successfully" Jan 23 20:37:50.392972 kubelet[2871]: I0123 20:37:50.392936 2871 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 20:37:50.447854 kubelet[2871]: I0123 20:37:50.447235 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82gjf\" (UniqueName: \"kubernetes.io/projected/779b8db9-244e-4afc-97a4-009f61b645ae-kube-api-access-82gjf\") pod \"coredns-674b8bbfcf-hn6qg\" (UID: \"779b8db9-244e-4afc-97a4-009f61b645ae\") " pod="kube-system/coredns-674b8bbfcf-hn6qg" Jan 23 20:37:50.447854 kubelet[2871]: I0123 20:37:50.447482 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/779b8db9-244e-4afc-97a4-009f61b645ae-config-volume\") pod \"coredns-674b8bbfcf-hn6qg\" (UID: \"779b8db9-244e-4afc-97a4-009f61b645ae\") " pod="kube-system/coredns-674b8bbfcf-hn6qg" Jan 23 20:37:50.451027 systemd[1]: Created slice kubepods-burstable-pod779b8db9_244e_4afc_97a4_009f61b645ae.slice - libcontainer container kubepods-burstable-pod779b8db9_244e_4afc_97a4_009f61b645ae.slice. 
Jan 23 20:37:50.466217 systemd[1]: Created slice kubepods-burstable-pod09e0707e_ebf5_48f6_a221_76b735164e4e.slice - libcontainer container kubepods-burstable-pod09e0707e_ebf5_48f6_a221_76b735164e4e.slice. Jan 23 20:37:50.551293 kubelet[2871]: I0123 20:37:50.548362 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz8z7\" (UniqueName: \"kubernetes.io/projected/09e0707e-ebf5-48f6-a221-76b735164e4e-kube-api-access-jz8z7\") pod \"coredns-674b8bbfcf-9pj8c\" (UID: \"09e0707e-ebf5-48f6-a221-76b735164e4e\") " pod="kube-system/coredns-674b8bbfcf-9pj8c" Jan 23 20:37:50.551293 kubelet[2871]: I0123 20:37:50.548416 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09e0707e-ebf5-48f6-a221-76b735164e4e-config-volume\") pod \"coredns-674b8bbfcf-9pj8c\" (UID: \"09e0707e-ebf5-48f6-a221-76b735164e4e\") " pod="kube-system/coredns-674b8bbfcf-9pj8c" Jan 23 20:37:50.772005 containerd[1575]: time="2026-01-23T20:37:50.771881494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pj8c,Uid:09e0707e-ebf5-48f6-a221-76b735164e4e,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:50.777195 containerd[1575]: time="2026-01-23T20:37:50.777044444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hn6qg,Uid:779b8db9-244e-4afc-97a4-009f61b645ae,Namespace:kube-system,Attempt:0,}" Jan 23 20:37:50.983197 kubelet[2871]: I0123 20:37:50.983108 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7j87l" podStartSLOduration=7.228245014 podStartE2EDuration="17.983090705s" podCreationTimestamp="2026-01-23 20:37:33 +0000 UTC" firstStartedPulling="2026-01-23 20:37:35.33959157 +0000 UTC m=+8.725978500" lastFinishedPulling="2026-01-23 20:37:46.094437239 +0000 UTC m=+19.480824191" observedRunningTime="2026-01-23 20:37:50.981543431 +0000 UTC 
m=+24.367930365" watchObservedRunningTime="2026-01-23 20:37:50.983090705 +0000 UTC m=+24.369477662"
Jan 23 20:37:52.697875 systemd-networkd[1495]: cilium_host: Link UP
Jan 23 20:37:52.700625 systemd-networkd[1495]: cilium_net: Link UP
Jan 23 20:37:52.701768 systemd-networkd[1495]: cilium_net: Gained carrier
Jan 23 20:37:52.702877 systemd-networkd[1495]: cilium_host: Gained carrier
Jan 23 20:37:52.771358 systemd-networkd[1495]: cilium_host: Gained IPv6LL
Jan 23 20:37:52.855186 systemd-networkd[1495]: cilium_vxlan: Link UP
Jan 23 20:37:52.855195 systemd-networkd[1495]: cilium_vxlan: Gained carrier
Jan 23 20:37:53.288836 kernel: NET: Registered PF_ALG protocol family
Jan 23 20:37:53.499868 systemd-networkd[1495]: cilium_net: Gained IPv6LL
Jan 23 20:37:54.125820 systemd-networkd[1495]: lxc_health: Link UP
Jan 23 20:37:54.142155 systemd-networkd[1495]: lxc_health: Gained carrier
Jan 23 20:37:54.369818 systemd-networkd[1495]: lxca74dc1b24305: Link UP
Jan 23 20:37:54.385780 kernel: eth0: renamed from tmpcb94d
Jan 23 20:37:54.408976 systemd-networkd[1495]: lxca74dc1b24305: Gained carrier
Jan 23 20:37:54.410431 systemd-networkd[1495]: lxcca7b30c7cb2a: Link UP
Jan 23 20:37:54.423811 kernel: eth0: renamed from tmpefedb
Jan 23 20:37:54.425846 systemd-networkd[1495]: lxcca7b30c7cb2a: Gained carrier
Jan 23 20:37:54.844465 systemd-networkd[1495]: cilium_vxlan: Gained IPv6LL
Jan 23 20:37:55.227526 systemd-networkd[1495]: lxc_health: Gained IPv6LL
Jan 23 20:37:55.931519 systemd-networkd[1495]: lxcca7b30c7cb2a: Gained IPv6LL
Jan 23 20:37:56.316589 systemd-networkd[1495]: lxca74dc1b24305: Gained IPv6LL
Jan 23 20:37:58.796245 containerd[1575]: time="2026-01-23T20:37:58.796180593Z" level=info msg="connecting to shim cb94d24804d757a6241b3ff357f2620ffed63a287179f5f298a6677baadd7c73" address="unix:///run/containerd/s/58967125757084bad536c0d0899bc25c80e5670db33eba46d24522e11029ee65" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:37:58.832288 containerd[1575]: time="2026-01-23T20:37:58.831807070Z" level=info msg="connecting to shim efedb9fa6cbc32842581c88e3c993eebfacf85744ad3a3c63446f159a31b329a" address="unix:///run/containerd/s/f20c06ab1e03f580a0f2534ded74e41852c61329dc557d8fc97f12a6c321bdfc" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:37:58.847459 systemd[1]: Started cri-containerd-cb94d24804d757a6241b3ff357f2620ffed63a287179f5f298a6677baadd7c73.scope - libcontainer container cb94d24804d757a6241b3ff357f2620ffed63a287179f5f298a6677baadd7c73.
Jan 23 20:37:58.877450 systemd[1]: Started cri-containerd-efedb9fa6cbc32842581c88e3c993eebfacf85744ad3a3c63446f159a31b329a.scope - libcontainer container efedb9fa6cbc32842581c88e3c993eebfacf85744ad3a3c63446f159a31b329a.
Jan 23 20:37:58.985918 containerd[1575]: time="2026-01-23T20:37:58.985856779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pj8c,Uid:09e0707e-ebf5-48f6-a221-76b735164e4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb94d24804d757a6241b3ff357f2620ffed63a287179f5f298a6677baadd7c73\""
Jan 23 20:37:58.995507 containerd[1575]: time="2026-01-23T20:37:58.994000664Z" level=info msg="CreateContainer within sandbox \"cb94d24804d757a6241b3ff357f2620ffed63a287179f5f298a6677baadd7c73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 20:37:59.018189 containerd[1575]: time="2026-01-23T20:37:59.018130028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hn6qg,Uid:779b8db9-244e-4afc-97a4-009f61b645ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"efedb9fa6cbc32842581c88e3c993eebfacf85744ad3a3c63446f159a31b329a\""
Jan 23 20:37:59.033017 containerd[1575]: time="2026-01-23T20:37:59.032978215Z" level=info msg="CreateContainer within sandbox \"efedb9fa6cbc32842581c88e3c993eebfacf85744ad3a3c63446f159a31b329a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 20:37:59.035440 containerd[1575]: time="2026-01-23T20:37:59.035365264Z" level=info msg="Container 2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:37:59.040518 containerd[1575]: time="2026-01-23T20:37:59.040490512Z" level=info msg="Container 20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:37:59.041695 containerd[1575]: time="2026-01-23T20:37:59.041673524Z" level=info msg="CreateContainer within sandbox \"cb94d24804d757a6241b3ff357f2620ffed63a287179f5f298a6677baadd7c73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5\""
Jan 23 20:37:59.044213 containerd[1575]: time="2026-01-23T20:37:59.042666572Z" level=info msg="StartContainer for \"2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5\""
Jan 23 20:37:59.044213 containerd[1575]: time="2026-01-23T20:37:59.043544374Z" level=info msg="connecting to shim 2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5" address="unix:///run/containerd/s/58967125757084bad536c0d0899bc25c80e5670db33eba46d24522e11029ee65" protocol=ttrpc version=3
Jan 23 20:37:59.048069 containerd[1575]: time="2026-01-23T20:37:59.047994934Z" level=info msg="CreateContainer within sandbox \"efedb9fa6cbc32842581c88e3c993eebfacf85744ad3a3c63446f159a31b329a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2\""
Jan 23 20:37:59.048936 containerd[1575]: time="2026-01-23T20:37:59.048917631Z" level=info msg="StartContainer for \"20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2\""
Jan 23 20:37:59.051971 containerd[1575]: time="2026-01-23T20:37:59.051914548Z" level=info msg="connecting to shim 20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2" address="unix:///run/containerd/s/f20c06ab1e03f580a0f2534ded74e41852c61329dc557d8fc97f12a6c321bdfc" protocol=ttrpc version=3
Jan 23 20:37:59.069480 systemd[1]: Started cri-containerd-2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5.scope - libcontainer container 2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5.
Jan 23 20:37:59.080365 systemd[1]: Started cri-containerd-20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2.scope - libcontainer container 20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2.
Jan 23 20:37:59.131708 containerd[1575]: time="2026-01-23T20:37:59.131441645Z" level=info msg="StartContainer for \"20d21d6e925ee7ab0ebea8f179ecee8d05a4ff7f0ef7587852f2745191cc31d2\" returns successfully"
Jan 23 20:37:59.132149 containerd[1575]: time="2026-01-23T20:37:59.132102329Z" level=info msg="StartContainer for \"2c6024036de68fd3008535824e1b5bebe439cb3c59a15889ffe4fb16f65072c5\" returns successfully"
Jan 23 20:37:59.778481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221833219.mount: Deactivated successfully.
Jan 23 20:38:00.046090 kubelet[2871]: I0123 20:38:00.042645 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9pj8c" podStartSLOduration=27.041919584 podStartE2EDuration="27.041919584s" podCreationTimestamp="2026-01-23 20:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:38:00.040800371 +0000 UTC m=+33.427187340" watchObservedRunningTime="2026-01-23 20:38:00.041919584 +0000 UTC m=+33.428306541"
Jan 23 20:38:00.072982 kubelet[2871]: I0123 20:38:00.072892 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hn6qg" podStartSLOduration=27.0728724 podStartE2EDuration="27.0728724s" podCreationTimestamp="2026-01-23 20:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:38:00.070638369 +0000 UTC m=+33.457025322" watchObservedRunningTime="2026-01-23 20:38:00.0728724 +0000 UTC m=+33.459259357"
Jan 23 20:38:38.874166 systemd[1]: Started sshd@9-10.244.93.250:22-68.220.241.50:53038.service - OpenSSH per-connection server daemon (68.220.241.50:53038).
Jan 23 20:38:39.523037 sshd[4194]: Accepted publickey for core from 68.220.241.50 port 53038 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:38:39.526516 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:38:39.546058 systemd-logind[1547]: New session 12 of user core.
Jan 23 20:38:39.554194 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 20:38:40.505604 sshd[4197]: Connection closed by 68.220.241.50 port 53038
Jan 23 20:38:40.505427 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
Jan 23 20:38:40.518333 systemd[1]: sshd@9-10.244.93.250:22-68.220.241.50:53038.service: Deactivated successfully.
Jan 23 20:38:40.522132 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 20:38:40.525785 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit.
Jan 23 20:38:40.527553 systemd-logind[1547]: Removed session 12.
Jan 23 20:38:45.622923 systemd[1]: Started sshd@10-10.244.93.250:22-68.220.241.50:42838.service - OpenSSH per-connection server daemon (68.220.241.50:42838).
Jan 23 20:38:46.213091 sshd[4210]: Accepted publickey for core from 68.220.241.50 port 42838 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:38:46.216885 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:38:46.225856 systemd-logind[1547]: New session 13 of user core.
Jan 23 20:38:46.231414 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 20:38:46.720582 sshd[4213]: Connection closed by 68.220.241.50 port 42838
Jan 23 20:38:46.721416 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Jan 23 20:38:46.733705 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit.
Jan 23 20:38:46.734434 systemd[1]: sshd@10-10.244.93.250:22-68.220.241.50:42838.service: Deactivated successfully.
Jan 23 20:38:46.740889 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 20:38:46.745171 systemd-logind[1547]: Removed session 13.
Jan 23 20:38:51.840323 systemd[1]: Started sshd@11-10.244.93.250:22-68.220.241.50:42846.service - OpenSSH per-connection server daemon (68.220.241.50:42846).
Jan 23 20:38:52.433319 sshd[4228]: Accepted publickey for core from 68.220.241.50 port 42846 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:38:52.435775 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:38:52.445754 systemd-logind[1547]: New session 14 of user core.
Jan 23 20:38:52.457494 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 20:38:52.931292 sshd[4231]: Connection closed by 68.220.241.50 port 42846
Jan 23 20:38:52.930587 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Jan 23 20:38:52.943124 systemd[1]: sshd@11-10.244.93.250:22-68.220.241.50:42846.service: Deactivated successfully.
Jan 23 20:38:52.949867 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 20:38:52.952432 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit.
Jan 23 20:38:52.954240 systemd-logind[1547]: Removed session 14.
Jan 23 20:38:58.032360 systemd[1]: Started sshd@12-10.244.93.250:22-68.220.241.50:32824.service - OpenSSH per-connection server daemon (68.220.241.50:32824).
Jan 23 20:38:58.612356 sshd[4244]: Accepted publickey for core from 68.220.241.50 port 32824 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:38:58.617319 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:38:58.630419 systemd-logind[1547]: New session 15 of user core.
Jan 23 20:38:58.637421 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 20:38:59.162071 sshd[4247]: Connection closed by 68.220.241.50 port 32824
Jan 23 20:38:59.163406 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Jan 23 20:38:59.174208 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit.
Jan 23 20:38:59.175220 systemd[1]: sshd@12-10.244.93.250:22-68.220.241.50:32824.service: Deactivated successfully.
Jan 23 20:38:59.179168 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 20:38:59.182622 systemd-logind[1547]: Removed session 15.
Jan 23 20:38:59.265858 systemd[1]: Started sshd@13-10.244.93.250:22-68.220.241.50:32834.service - OpenSSH per-connection server daemon (68.220.241.50:32834).
Jan 23 20:38:59.878450 sshd[4259]: Accepted publickey for core from 68.220.241.50 port 32834 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:38:59.881861 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:38:59.892363 systemd-logind[1547]: New session 16 of user core.
Jan 23 20:38:59.903432 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 20:39:00.423210 sshd[4262]: Connection closed by 68.220.241.50 port 32834
Jan 23 20:39:00.424221 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:00.436312 systemd[1]: sshd@13-10.244.93.250:22-68.220.241.50:32834.service: Deactivated successfully.
Jan 23 20:39:00.440900 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 20:39:00.443680 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit.
Jan 23 20:39:00.449582 systemd-logind[1547]: Removed session 16.
Jan 23 20:39:00.530733 systemd[1]: Started sshd@14-10.244.93.250:22-68.220.241.50:32836.service - OpenSSH per-connection server daemon (68.220.241.50:32836).
Jan 23 20:39:01.138400 sshd[4271]: Accepted publickey for core from 68.220.241.50 port 32836 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:01.141379 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:01.152548 systemd-logind[1547]: New session 17 of user core.
Jan 23 20:39:01.158685 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 20:39:01.647510 sshd[4274]: Connection closed by 68.220.241.50 port 32836
Jan 23 20:39:01.648395 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:01.657428 systemd[1]: sshd@14-10.244.93.250:22-68.220.241.50:32836.service: Deactivated successfully.
Jan 23 20:39:01.660705 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 20:39:01.661719 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit.
Jan 23 20:39:01.665108 systemd-logind[1547]: Removed session 17.
Jan 23 20:39:06.756851 systemd[1]: Started sshd@15-10.244.93.250:22-68.220.241.50:49282.service - OpenSSH per-connection server daemon (68.220.241.50:49282).
Jan 23 20:39:07.357241 sshd[4287]: Accepted publickey for core from 68.220.241.50 port 49282 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:07.360475 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:07.373969 systemd-logind[1547]: New session 18 of user core.
Jan 23 20:39:07.381503 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 20:39:07.838655 sshd[4290]: Connection closed by 68.220.241.50 port 49282
Jan 23 20:39:07.840033 sshd-session[4287]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:07.850238 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit.
Jan 23 20:39:07.850913 systemd[1]: sshd@15-10.244.93.250:22-68.220.241.50:49282.service: Deactivated successfully.
Jan 23 20:39:07.856063 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 20:39:07.858119 systemd-logind[1547]: Removed session 18.
Jan 23 20:39:12.944576 systemd[1]: Started sshd@16-10.244.93.250:22-68.220.241.50:47568.service - OpenSSH per-connection server daemon (68.220.241.50:47568).
Jan 23 20:39:13.549402 sshd[4302]: Accepted publickey for core from 68.220.241.50 port 47568 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:13.553461 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:13.563570 systemd-logind[1547]: New session 19 of user core.
Jan 23 20:39:13.569460 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 20:39:14.059238 sshd[4305]: Connection closed by 68.220.241.50 port 47568
Jan 23 20:39:14.060624 sshd-session[4302]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:14.070394 systemd[1]: sshd@16-10.244.93.250:22-68.220.241.50:47568.service: Deactivated successfully.
Jan 23 20:39:14.074386 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 20:39:14.079024 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit.
Jan 23 20:39:14.080303 systemd-logind[1547]: Removed session 19.
Jan 23 20:39:14.168721 systemd[1]: Started sshd@17-10.244.93.250:22-68.220.241.50:47578.service - OpenSSH per-connection server daemon (68.220.241.50:47578).
Jan 23 20:39:14.773913 sshd[4317]: Accepted publickey for core from 68.220.241.50 port 47578 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:14.776806 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:14.789347 systemd-logind[1547]: New session 20 of user core.
Jan 23 20:39:14.796515 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 20:39:15.547379 sshd[4320]: Connection closed by 68.220.241.50 port 47578
Jan 23 20:39:15.548377 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:15.554391 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
Jan 23 20:39:15.554666 systemd[1]: sshd@17-10.244.93.250:22-68.220.241.50:47578.service: Deactivated successfully.
Jan 23 20:39:15.558571 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 20:39:15.562650 systemd-logind[1547]: Removed session 20.
Jan 23 20:39:15.735653 systemd[1]: Started sshd@18-10.244.93.250:22-68.220.241.50:47590.service - OpenSSH per-connection server daemon (68.220.241.50:47590).
Jan 23 20:39:16.345930 sshd[4330]: Accepted publickey for core from 68.220.241.50 port 47590 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:16.349141 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:16.359342 systemd-logind[1547]: New session 21 of user core.
Jan 23 20:39:16.368455 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 20:39:17.467326 sshd[4333]: Connection closed by 68.220.241.50 port 47590
Jan 23 20:39:17.467661 sshd-session[4330]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:17.492572 systemd[1]: sshd@18-10.244.93.250:22-68.220.241.50:47590.service: Deactivated successfully.
Jan 23 20:39:17.496059 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 20:39:17.499058 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
Jan 23 20:39:17.500671 systemd-logind[1547]: Removed session 21.
Jan 23 20:39:17.578283 systemd[1]: Started sshd@19-10.244.93.250:22-68.220.241.50:47606.service - OpenSSH per-connection server daemon (68.220.241.50:47606).
Jan 23 20:39:18.212252 sshd[4350]: Accepted publickey for core from 68.220.241.50 port 47606 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:18.215597 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:18.224673 systemd-logind[1547]: New session 22 of user core.
Jan 23 20:39:18.236977 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 20:39:18.877096 sshd[4353]: Connection closed by 68.220.241.50 port 47606
Jan 23 20:39:18.880473 sshd-session[4350]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:18.888672 systemd[1]: sshd@19-10.244.93.250:22-68.220.241.50:47606.service: Deactivated successfully.
Jan 23 20:39:18.894982 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 20:39:18.897445 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit.
Jan 23 20:39:18.900056 systemd-logind[1547]: Removed session 22.
Jan 23 20:39:19.004076 systemd[1]: Started sshd@20-10.244.93.250:22-68.220.241.50:47610.service - OpenSSH per-connection server daemon (68.220.241.50:47610).
Jan 23 20:39:19.617811 sshd[4363]: Accepted publickey for core from 68.220.241.50 port 47610 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:19.622247 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:19.634581 systemd-logind[1547]: New session 23 of user core.
Jan 23 20:39:19.640477 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 20:39:20.121938 sshd[4366]: Connection closed by 68.220.241.50 port 47610
Jan 23 20:39:20.123084 sshd-session[4363]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:20.132589 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
Jan 23 20:39:20.133245 systemd[1]: sshd@20-10.244.93.250:22-68.220.241.50:47610.service: Deactivated successfully.
Jan 23 20:39:20.138445 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 20:39:20.143540 systemd-logind[1547]: Removed session 23.
Jan 23 20:39:25.229949 systemd[1]: Started sshd@21-10.244.93.250:22-68.220.241.50:43368.service - OpenSSH per-connection server daemon (68.220.241.50:43368).
Jan 23 20:39:25.820332 sshd[4381]: Accepted publickey for core from 68.220.241.50 port 43368 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:25.821012 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:25.829151 systemd-logind[1547]: New session 24 of user core.
Jan 23 20:39:25.832412 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 20:39:26.320371 sshd[4384]: Connection closed by 68.220.241.50 port 43368
Jan 23 20:39:26.322779 sshd-session[4381]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:26.331184 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Jan 23 20:39:26.331710 systemd[1]: sshd@21-10.244.93.250:22-68.220.241.50:43368.service: Deactivated successfully.
Jan 23 20:39:26.335700 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 20:39:26.340353 systemd-logind[1547]: Removed session 24.
Jan 23 20:39:31.430836 systemd[1]: Started sshd@22-10.244.93.250:22-68.220.241.50:43384.service - OpenSSH per-connection server daemon (68.220.241.50:43384).
Jan 23 20:39:32.024319 sshd[4399]: Accepted publickey for core from 68.220.241.50 port 43384 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:32.028687 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:32.035943 systemd-logind[1547]: New session 25 of user core.
Jan 23 20:39:32.046437 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 20:39:32.514953 sshd[4402]: Connection closed by 68.220.241.50 port 43384
Jan 23 20:39:32.517007 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:32.525114 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit.
Jan 23 20:39:32.525603 systemd[1]: sshd@22-10.244.93.250:22-68.220.241.50:43384.service: Deactivated successfully.
Jan 23 20:39:32.530073 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 20:39:32.533494 systemd-logind[1547]: Removed session 25.
Jan 23 20:39:32.625591 systemd[1]: Started sshd@23-10.244.93.250:22-68.220.241.50:48138.service - OpenSSH per-connection server daemon (68.220.241.50:48138).
Jan 23 20:39:33.209244 sshd[4413]: Accepted publickey for core from 68.220.241.50 port 48138 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:33.213439 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:33.222989 systemd-logind[1547]: New session 26 of user core.
Jan 23 20:39:33.231477 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 20:39:35.214824 containerd[1575]: time="2026-01-23T20:39:35.214768143Z" level=info msg="StopContainer for \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" with timeout 30 (s)"
Jan 23 20:39:35.217297 containerd[1575]: time="2026-01-23T20:39:35.216313295Z" level=info msg="Stop container \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" with signal terminated"
Jan 23 20:39:35.270805 systemd[1]: cri-containerd-63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85.scope: Deactivated successfully.
Jan 23 20:39:35.276111 containerd[1575]: time="2026-01-23T20:39:35.276058415Z" level=info msg="received container exit event container_id:\"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" id:\"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" pid:3449 exited_at:{seconds:1769200775 nanos:275120397}"
Jan 23 20:39:35.305038 containerd[1575]: time="2026-01-23T20:39:35.304941365Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 20:39:35.325513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85-rootfs.mount: Deactivated successfully.
Jan 23 20:39:35.328833 containerd[1575]: time="2026-01-23T20:39:35.328795860Z" level=info msg="StopContainer for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" with timeout 2 (s)"
Jan 23 20:39:35.329392 containerd[1575]: time="2026-01-23T20:39:35.329374777Z" level=info msg="Stop container \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" with signal terminated"
Jan 23 20:39:35.338117 containerd[1575]: time="2026-01-23T20:39:35.338091385Z" level=info msg="StopContainer for \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" returns successfully"
Jan 23 20:39:35.340372 containerd[1575]: time="2026-01-23T20:39:35.340348224Z" level=info msg="StopPodSandbox for \"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\""
Jan 23 20:39:35.346601 systemd-networkd[1495]: lxc_health: Link DOWN
Jan 23 20:39:35.346609 systemd-networkd[1495]: lxc_health: Lost carrier
Jan 23 20:39:35.351396 containerd[1575]: time="2026-01-23T20:39:35.351330823Z" level=info msg="Container to stop \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:39:35.368607 systemd[1]: cri-containerd-a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2.scope: Deactivated successfully.
Jan 23 20:39:35.368964 systemd[1]: cri-containerd-a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2.scope: Consumed 7.961s CPU time, 222.8M memory peak, 102.4M read from disk, 13.3M written to disk.
Jan 23 20:39:35.372281 containerd[1575]: time="2026-01-23T20:39:35.371844582Z" level=info msg="received container exit event container_id:\"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" id:\"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" pid:3520 exited_at:{seconds:1769200775 nanos:371616119}"
Jan 23 20:39:35.374860 systemd[1]: cri-containerd-bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a.scope: Deactivated successfully.
Jan 23 20:39:35.381943 containerd[1575]: time="2026-01-23T20:39:35.381906870Z" level=info msg="received sandbox exit event container_id:\"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" id:\"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" exit_status:137 exited_at:{seconds:1769200775 nanos:381603513}" monitor_name=podsandbox
Jan 23 20:39:35.413875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2-rootfs.mount: Deactivated successfully.
Jan 23 20:39:35.418939 containerd[1575]: time="2026-01-23T20:39:35.418907354Z" level=info msg="StopContainer for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" returns successfully"
Jan 23 20:39:35.420870 containerd[1575]: time="2026-01-23T20:39:35.420846228Z" level=info msg="StopPodSandbox for \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\""
Jan 23 20:39:35.421424 containerd[1575]: time="2026-01-23T20:39:35.421395319Z" level=info msg="Container to stop \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:39:35.421573 containerd[1575]: time="2026-01-23T20:39:35.421509339Z" level=info msg="Container to stop \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:39:35.421573 containerd[1575]: time="2026-01-23T20:39:35.421523673Z" level=info msg="Container to stop \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:39:35.421573 containerd[1575]: time="2026-01-23T20:39:35.421533288Z" level=info msg="Container to stop \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:39:35.421573 containerd[1575]: time="2026-01-23T20:39:35.421541814Z" level=info msg="Container to stop \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 20:39:35.431346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a-rootfs.mount: Deactivated successfully.
Jan 23 20:39:35.440173 containerd[1575]: time="2026-01-23T20:39:35.440135987Z" level=info msg="shim disconnected" id=bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a namespace=k8s.io
Jan 23 20:39:35.440173 containerd[1575]: time="2026-01-23T20:39:35.440169453Z" level=warning msg="cleaning up after shim disconnected" id=bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a namespace=k8s.io
Jan 23 20:39:35.457642 systemd[1]: cri-containerd-34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297.scope: Deactivated successfully.
Jan 23 20:39:35.461061 containerd[1575]: time="2026-01-23T20:39:35.440182187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 20:39:35.462177 containerd[1575]: time="2026-01-23T20:39:35.462121094Z" level=info msg="received sandbox exit event container_id:\"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" id:\"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" exit_status:137 exited_at:{seconds:1769200775 nanos:460658193}" monitor_name=podsandbox
Jan 23 20:39:35.496370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297-rootfs.mount: Deactivated successfully.
Jan 23 20:39:35.506852 containerd[1575]: time="2026-01-23T20:39:35.506813243Z" level=info msg="shim disconnected" id=34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297 namespace=k8s.io
Jan 23 20:39:35.507050 containerd[1575]: time="2026-01-23T20:39:35.507035803Z" level=warning msg="cleaning up after shim disconnected" id=34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297 namespace=k8s.io
Jan 23 20:39:35.507140 containerd[1575]: time="2026-01-23T20:39:35.507107394Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 20:39:35.513737 containerd[1575]: time="2026-01-23T20:39:35.513501709Z" level=info msg="received sandbox container exit event sandbox_id:\"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" exit_status:137 exited_at:{seconds:1769200775 nanos:381603513}" monitor_name=criService
Jan 23 20:39:35.513999 containerd[1575]: time="2026-01-23T20:39:35.513703988Z" level=info msg="TearDown network for sandbox \"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" successfully"
Jan 23 20:39:35.513999 containerd[1575]: time="2026-01-23T20:39:35.513796624Z" level=info msg="StopPodSandbox for \"bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a\" returns successfully"
Jan 23 20:39:35.514761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfea09d6c63f4bb064a3af926b1259dd4b8fa37e27ced0d6a8c7f7cf6159520a-shm.mount: Deactivated successfully.
Jan 23 20:39:35.531112 containerd[1575]: time="2026-01-23T20:39:35.531003778Z" level=info msg="TearDown network for sandbox \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" successfully"
Jan 23 20:39:35.531112 containerd[1575]: time="2026-01-23T20:39:35.531036482Z" level=info msg="StopPodSandbox for \"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" returns successfully"
Jan 23 20:39:35.531479 containerd[1575]: time="2026-01-23T20:39:35.531345923Z" level=info msg="received sandbox container exit event sandbox_id:\"34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297\" exit_status:137 exited_at:{seconds:1769200775 nanos:460658193}" monitor_name=criService
Jan 23 20:39:35.616892 kubelet[2871]: I0123 20:39:35.616673 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-cilium-config-path\") pod \"dba5cb81-ba86-479e-ad16-7e3dd3f5592c\" (UID: \"dba5cb81-ba86-479e-ad16-7e3dd3f5592c\") "
Jan 23 20:39:35.620149 kubelet[2871]: I0123 20:39:35.618518 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4wl\" (UniqueName: \"kubernetes.io/projected/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-kube-api-access-8p4wl\") pod \"dba5cb81-ba86-479e-ad16-7e3dd3f5592c\" (UID: \"dba5cb81-ba86-479e-ad16-7e3dd3f5592c\") "
Jan 23 20:39:35.630950 kubelet[2871]: I0123 20:39:35.630904 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dba5cb81-ba86-479e-ad16-7e3dd3f5592c" (UID: "dba5cb81-ba86-479e-ad16-7e3dd3f5592c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 20:39:35.648584 kubelet[2871]: I0123 20:39:35.648492 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-kube-api-access-8p4wl" (OuterVolumeSpecName: "kube-api-access-8p4wl") pod "dba5cb81-ba86-479e-ad16-7e3dd3f5592c" (UID: "dba5cb81-ba86-479e-ad16-7e3dd3f5592c"). InnerVolumeSpecName "kube-api-access-8p4wl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 20:39:35.721321 kubelet[2871]: I0123 20:39:35.720337 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cni-path\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.721321 kubelet[2871]: I0123 20:39:35.720448 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72259001-6d43-408b-9c34-d7aa5bf12ed4-clustermesh-secrets\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.721321 kubelet[2871]: I0123 20:39:35.720497 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-run\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.721321 kubelet[2871]: I0123 20:39:35.720545 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-lib-modules\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.721321 kubelet[2871]: I0123 20:39:35.720539 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cni-path" (OuterVolumeSpecName: "cni-path") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:39:35.721321 kubelet[2871]: I0123 20:39:35.720589 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r59pl\" (UniqueName: \"kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-kube-api-access-r59pl\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722023 kubelet[2871]: I0123 20:39:35.720630 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-bpf-maps\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722023 kubelet[2871]: I0123 20:39:35.720652 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 20:39:35.722023 kubelet[2871]: I0123 20:39:35.720676 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-config-path\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722023 kubelet[2871]: I0123 20:39:35.720714 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-etc-cni-netd\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722023 kubelet[2871]: I0123 20:39:35.720751 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-xtables-lock\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722023 kubelet[2871]: I0123 20:39:35.720791 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-cgroup\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.720831 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-hostproc\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") "
Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.720881 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName:
\"kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-hubble-tls\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.720916 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-net\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.720956 2871 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-kernel\") pod \"72259001-6d43-408b-9c34-d7aa5bf12ed4\" (UID: \"72259001-6d43-408b-9c34-d7aa5bf12ed4\") " Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.721052 2871 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-cilium-config-path\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.721086 2871 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8p4wl\" (UniqueName: \"kubernetes.io/projected/dba5cb81-ba86-479e-ad16-7e3dd3f5592c-kube-api-access-8p4wl\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.722658 kubelet[2871]: I0123 20:39:35.721112 2871 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cni-path\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.723214 kubelet[2871]: I0123 20:39:35.721136 2871 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-run\") on 
node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.723214 kubelet[2871]: I0123 20:39:35.721200 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.723214 kubelet[2871]: I0123 20:39:35.721249 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.724623 kubelet[2871]: I0123 20:39:35.723397 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.724623 kubelet[2871]: I0123 20:39:35.723456 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.725459 kubelet[2871]: I0123 20:39:35.725399 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.726545 kubelet[2871]: I0123 20:39:35.726412 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.726807 kubelet[2871]: I0123 20:39:35.726742 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-hostproc" (OuterVolumeSpecName: "hostproc") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.728865 kubelet[2871]: I0123 20:39:35.728564 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 20:39:35.730876 kubelet[2871]: I0123 20:39:35.730762 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72259001-6d43-408b-9c34-d7aa5bf12ed4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 20:39:35.732135 kubelet[2871]: I0123 20:39:35.731866 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 20:39:35.736406 kubelet[2871]: I0123 20:39:35.736342 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-kube-api-access-r59pl" (OuterVolumeSpecName: "kube-api-access-r59pl") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "kube-api-access-r59pl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 20:39:35.738931 kubelet[2871]: I0123 20:39:35.738891 2871 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "72259001-6d43-408b-9c34-d7aa5bf12ed4" (UID: "72259001-6d43-408b-9c34-d7aa5bf12ed4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.821976 2871 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-cgroup\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822072 2871 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-hostproc\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822102 2871 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-hubble-tls\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822127 2871 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-net\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822154 2871 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-host-proc-sys-kernel\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822183 2871 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72259001-6d43-408b-9c34-d7aa5bf12ed4-clustermesh-secrets\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822207 2871 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-lib-modules\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.822226 kubelet[2871]: I0123 20:39:35.822229 2871 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r59pl\" (UniqueName: \"kubernetes.io/projected/72259001-6d43-408b-9c34-d7aa5bf12ed4-kube-api-access-r59pl\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.823084 kubelet[2871]: I0123 20:39:35.822253 2871 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-bpf-maps\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.823084 kubelet[2871]: I0123 20:39:35.822341 2871 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72259001-6d43-408b-9c34-d7aa5bf12ed4-cilium-config-path\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.823084 kubelet[2871]: I0123 20:39:35.822366 2871 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-etc-cni-netd\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:35.823084 kubelet[2871]: I0123 20:39:35.822388 2871 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72259001-6d43-408b-9c34-d7aa5bf12ed4-xtables-lock\") on node \"srv-zm8g6.gb1.brightbox.com\" DevicePath \"\"" Jan 23 20:39:36.324528 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34b03bb5c5cd1ba1585671d6a2cd449b9316cc722e9efb970d9e6b492af68297-shm.mount: Deactivated successfully. Jan 23 20:39:36.324804 systemd[1]: var-lib-kubelet-pods-72259001\x2d6d43\x2d408b\x2d9c34\x2dd7aa5bf12ed4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 20:39:36.324985 systemd[1]: var-lib-kubelet-pods-72259001\x2d6d43\x2d408b\x2d9c34\x2dd7aa5bf12ed4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 20:39:36.325171 systemd[1]: var-lib-kubelet-pods-dba5cb81\x2dba86\x2d479e\x2dad16\x2d7e3dd3f5592c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8p4wl.mount: Deactivated successfully. Jan 23 20:39:36.325398 systemd[1]: var-lib-kubelet-pods-72259001\x2d6d43\x2d408b\x2d9c34\x2dd7aa5bf12ed4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr59pl.mount: Deactivated successfully. Jan 23 20:39:36.385376 kubelet[2871]: I0123 20:39:36.383793 2871 scope.go:117] "RemoveContainer" containerID="63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85" Jan 23 20:39:36.396238 containerd[1575]: time="2026-01-23T20:39:36.394317603Z" level=info msg="RemoveContainer for \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\"" Jan 23 20:39:36.399686 systemd[1]: Removed slice kubepods-besteffort-poddba5cb81_ba86_479e_ad16_7e3dd3f5592c.slice - libcontainer container kubepods-besteffort-poddba5cb81_ba86_479e_ad16_7e3dd3f5592c.slice. Jan 23 20:39:36.414934 containerd[1575]: time="2026-01-23T20:39:36.414612925Z" level=info msg="RemoveContainer for \"63c9c2bcc37552a54cb6ee0f2c8eb2c8aa5e6f4a766d22526af1c67f3f64bb85\" returns successfully" Jan 23 20:39:36.415141 systemd[1]: Removed slice kubepods-burstable-pod72259001_6d43_408b_9c34_d7aa5bf12ed4.slice - libcontainer container kubepods-burstable-pod72259001_6d43_408b_9c34_d7aa5bf12ed4.slice. Jan 23 20:39:36.415693 kubelet[2871]: I0123 20:39:36.415530 2871 scope.go:117] "RemoveContainer" containerID="a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2" Jan 23 20:39:36.415323 systemd[1]: kubepods-burstable-pod72259001_6d43_408b_9c34_d7aa5bf12ed4.slice: Consumed 8.090s CPU time, 223.2M memory peak, 104.1M read from disk, 13.3M written to disk. 
Jan 23 20:39:36.421533 containerd[1575]: time="2026-01-23T20:39:36.421424286Z" level=info msg="RemoveContainer for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\"" Jan 23 20:39:36.427633 containerd[1575]: time="2026-01-23T20:39:36.427606682Z" level=info msg="RemoveContainer for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" returns successfully" Jan 23 20:39:36.428339 kubelet[2871]: I0123 20:39:36.427901 2871 scope.go:117] "RemoveContainer" containerID="f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe" Jan 23 20:39:36.429671 containerd[1575]: time="2026-01-23T20:39:36.429651310Z" level=info msg="RemoveContainer for \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\"" Jan 23 20:39:36.434334 containerd[1575]: time="2026-01-23T20:39:36.434289133Z" level=info msg="RemoveContainer for \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\" returns successfully" Jan 23 20:39:36.434858 kubelet[2871]: I0123 20:39:36.434782 2871 scope.go:117] "RemoveContainer" containerID="265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d" Jan 23 20:39:36.439787 containerd[1575]: time="2026-01-23T20:39:36.438743661Z" level=info msg="RemoveContainer for \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\"" Jan 23 20:39:36.441924 containerd[1575]: time="2026-01-23T20:39:36.441893985Z" level=info msg="RemoveContainer for \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\" returns successfully" Jan 23 20:39:36.442129 kubelet[2871]: I0123 20:39:36.442118 2871 scope.go:117] "RemoveContainer" containerID="3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8" Jan 23 20:39:36.443799 containerd[1575]: time="2026-01-23T20:39:36.443778302Z" level=info msg="RemoveContainer for \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\"" Jan 23 20:39:36.446363 containerd[1575]: time="2026-01-23T20:39:36.446304726Z" level=info msg="RemoveContainer 
for \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\" returns successfully" Jan 23 20:39:36.446567 kubelet[2871]: I0123 20:39:36.446512 2871 scope.go:117] "RemoveContainer" containerID="385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744" Jan 23 20:39:36.448129 containerd[1575]: time="2026-01-23T20:39:36.448108404Z" level=info msg="RemoveContainer for \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\"" Jan 23 20:39:36.450422 containerd[1575]: time="2026-01-23T20:39:36.450402028Z" level=info msg="RemoveContainer for \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\" returns successfully" Jan 23 20:39:36.450607 kubelet[2871]: I0123 20:39:36.450584 2871 scope.go:117] "RemoveContainer" containerID="a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2" Jan 23 20:39:36.451059 containerd[1575]: time="2026-01-23T20:39:36.450962754Z" level=error msg="ContainerStatus for \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\": not found" Jan 23 20:39:36.451418 kubelet[2871]: E0123 20:39:36.451393 2871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\": not found" containerID="a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2" Jan 23 20:39:36.451765 kubelet[2871]: I0123 20:39:36.451595 2871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2"} err="failed to get container status \"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a8310f0d2a9074c530b113062e33b44e475e88101ed2c1283f1f045c7ceeefe2\": not found" Jan 23 20:39:36.451765 kubelet[2871]: I0123 20:39:36.451733 2871 scope.go:117] "RemoveContainer" containerID="f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe" Jan 23 20:39:36.453413 containerd[1575]: time="2026-01-23T20:39:36.452925533Z" level=error msg="ContainerStatus for \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\": not found" Jan 23 20:39:36.464103 kubelet[2871]: E0123 20:39:36.463151 2871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\": not found" containerID="f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe" Jan 23 20:39:36.464103 kubelet[2871]: I0123 20:39:36.463207 2871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe"} err="failed to get container status \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5fbdb154f5f3e584085f9c4c7fbcde54aca0c4b4d762cb79a1ea7aa609449fe\": not found" Jan 23 20:39:36.464103 kubelet[2871]: I0123 20:39:36.463338 2871 scope.go:117] "RemoveContainer" containerID="265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d" Jan 23 20:39:36.464103 kubelet[2871]: E0123 20:39:36.464036 2871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\": not found" 
containerID="265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d" Jan 23 20:39:36.464354 containerd[1575]: time="2026-01-23T20:39:36.463703682Z" level=error msg="ContainerStatus for \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\": not found" Jan 23 20:39:36.464391 kubelet[2871]: I0123 20:39:36.464103 2871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d"} err="failed to get container status \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"265f201457d4eb4f3fdfc4ee5e85a0aad78690f9f3aab96f817dd6a61a626d3d\": not found" Jan 23 20:39:36.464391 kubelet[2871]: I0123 20:39:36.464136 2871 scope.go:117] "RemoveContainer" containerID="3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8" Jan 23 20:39:36.464598 containerd[1575]: time="2026-01-23T20:39:36.464544297Z" level=error msg="ContainerStatus for \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\": not found" Jan 23 20:39:36.464852 kubelet[2871]: E0123 20:39:36.464816 2871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\": not found" containerID="3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8" Jan 23 20:39:36.464906 kubelet[2871]: I0123 20:39:36.464880 2871 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8"} err="failed to get container status \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3683f6c31b8ab1a3b08645ea521f65d5e90e66313812d07d314718ed7c5024f8\": not found" Jan 23 20:39:36.464939 kubelet[2871]: I0123 20:39:36.464912 2871 scope.go:117] "RemoveContainer" containerID="385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744" Jan 23 20:39:36.465207 containerd[1575]: time="2026-01-23T20:39:36.465184439Z" level=error msg="ContainerStatus for \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\": not found" Jan 23 20:39:36.465453 kubelet[2871]: E0123 20:39:36.465412 2871 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\": not found" containerID="385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744" Jan 23 20:39:36.465453 kubelet[2871]: I0123 20:39:36.465435 2871 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744"} err="failed to get container status \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\": rpc error: code = NotFound desc = an error occurred when try to find container \"385ff7e7acf9811dd7e16e11ec67b8e66927233417a54938763cd3dcc1645744\": not found" Jan 23 20:39:36.815319 kubelet[2871]: I0123 20:39:36.814996 2871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72259001-6d43-408b-9c34-d7aa5bf12ed4" 
path="/var/lib/kubelet/pods/72259001-6d43-408b-9c34-d7aa5bf12ed4/volumes" Jan 23 20:39:36.817853 kubelet[2871]: I0123 20:39:36.817807 2871 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba5cb81-ba86-479e-ad16-7e3dd3f5592c" path="/var/lib/kubelet/pods/dba5cb81-ba86-479e-ad16-7e3dd3f5592c/volumes" Jan 23 20:39:36.936987 kubelet[2871]: E0123 20:39:36.936888 2871 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 20:39:37.235765 sshd[4416]: Connection closed by 68.220.241.50 port 48138 Jan 23 20:39:37.237646 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Jan 23 20:39:37.247161 systemd[1]: sshd@23-10.244.93.250:22-68.220.241.50:48138.service: Deactivated successfully. Jan 23 20:39:37.249841 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 20:39:37.250233 systemd[1]: session-26.scope: Consumed 1.053s CPU time, 26.6M memory peak. Jan 23 20:39:37.250913 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit. Jan 23 20:39:37.253091 systemd-logind[1547]: Removed session 26. Jan 23 20:39:37.340552 systemd[1]: Started sshd@24-10.244.93.250:22-68.220.241.50:48150.service - OpenSSH per-connection server daemon (68.220.241.50:48150). Jan 23 20:39:37.958816 sshd[4564]: Accepted publickey for core from 68.220.241.50 port 48150 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 20:39:37.962760 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 20:39:37.973809 systemd-logind[1547]: New session 27 of user core. Jan 23 20:39:37.982600 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 20:39:38.879500 systemd[1]: Created slice kubepods-burstable-pod9b8e974a_4e60_45a6_90e8_cfb1ba314d61.slice - libcontainer container kubepods-burstable-pod9b8e974a_4e60_45a6_90e8_cfb1ba314d61.slice. 
Jan 23 20:39:38.920332 sshd[4567]: Connection closed by 68.220.241.50 port 48150
Jan 23 20:39:38.921003 sshd-session[4564]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:38.927574 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit.
Jan 23 20:39:38.928319 systemd[1]: sshd@24-10.244.93.250:22-68.220.241.50:48150.service: Deactivated successfully.
Jan 23 20:39:38.931856 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 20:39:38.934243 systemd-logind[1547]: Removed session 27.
Jan 23 20:39:39.029957 systemd[1]: Started sshd@25-10.244.93.250:22-68.220.241.50:48156.service - OpenSSH per-connection server daemon (68.220.241.50:48156).
Jan 23 20:39:39.045543 kubelet[2871]: I0123 20:39:39.045447 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-cni-path\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.046502 kubelet[2871]: I0123 20:39:39.045560 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-xtables-lock\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.046502 kubelet[2871]: I0123 20:39:39.045617 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-clustermesh-secrets\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.046502 kubelet[2871]: I0123 20:39:39.045662 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-host-proc-sys-kernel\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.046502 kubelet[2871]: I0123 20:39:39.045708 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-cilium-cgroup\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.046502 kubelet[2871]: I0123 20:39:39.045752 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954tr\" (UniqueName: \"kubernetes.io/projected/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-kube-api-access-954tr\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047050 kubelet[2871]: I0123 20:39:39.045794 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-etc-cni-netd\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047050 kubelet[2871]: I0123 20:39:39.045840 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-lib-modules\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047050 kubelet[2871]: I0123 20:39:39.045878 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-cilium-ipsec-secrets\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047050 kubelet[2871]: I0123 20:39:39.045917 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-hubble-tls\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047050 kubelet[2871]: I0123 20:39:39.045958 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-cilium-run\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047050 kubelet[2871]: I0123 20:39:39.045999 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-hostproc\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047586 kubelet[2871]: I0123 20:39:39.046036 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-cilium-config-path\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047586 kubelet[2871]: I0123 20:39:39.046098 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-bpf-maps\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.047586 kubelet[2871]: I0123 20:39:39.046158 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b8e974a-4e60-45a6-90e8-cfb1ba314d61-host-proc-sys-net\") pod \"cilium-sw424\" (UID: \"9b8e974a-4e60-45a6-90e8-cfb1ba314d61\") " pod="kube-system/cilium-sw424"
Jan 23 20:39:39.186594 containerd[1575]: time="2026-01-23T20:39:39.185719521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw424,Uid:9b8e974a-4e60-45a6-90e8-cfb1ba314d61,Namespace:kube-system,Attempt:0,}"
Jan 23 20:39:39.205596 containerd[1575]: time="2026-01-23T20:39:39.205543580Z" level=info msg="connecting to shim 8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd" address="unix:///run/containerd/s/7baff29e815f7d50437bd76f05e200a072103cf3bfca72766a62d7b27b6c1846" namespace=k8s.io protocol=ttrpc version=3
Jan 23 20:39:39.234430 systemd[1]: Started cri-containerd-8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd.scope - libcontainer container 8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd.
Jan 23 20:39:39.267118 containerd[1575]: time="2026-01-23T20:39:39.267062170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw424,Uid:9b8e974a-4e60-45a6-90e8-cfb1ba314d61,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\""
Jan 23 20:39:39.278050 containerd[1575]: time="2026-01-23T20:39:39.278006593Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 20:39:39.284308 containerd[1575]: time="2026-01-23T20:39:39.283644757Z" level=info msg="Container 1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:39:39.287351 containerd[1575]: time="2026-01-23T20:39:39.287318018Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f\""
Jan 23 20:39:39.287840 containerd[1575]: time="2026-01-23T20:39:39.287816382Z" level=info msg="StartContainer for \"1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f\""
Jan 23 20:39:39.289926 containerd[1575]: time="2026-01-23T20:39:39.289899277Z" level=info msg="connecting to shim 1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f" address="unix:///run/containerd/s/7baff29e815f7d50437bd76f05e200a072103cf3bfca72766a62d7b27b6c1846" protocol=ttrpc version=3
Jan 23 20:39:39.311502 systemd[1]: Started cri-containerd-1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f.scope - libcontainer container 1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f.
Jan 23 20:39:39.368383 containerd[1575]: time="2026-01-23T20:39:39.368334040Z" level=info msg="StartContainer for \"1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f\" returns successfully"
Jan 23 20:39:39.385430 systemd[1]: cri-containerd-1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f.scope: Deactivated successfully.
Jan 23 20:39:39.386334 systemd[1]: cri-containerd-1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f.scope: Consumed 34ms CPU time, 9.7M memory peak, 3.2M read from disk.
Jan 23 20:39:39.388886 containerd[1575]: time="2026-01-23T20:39:39.388807185Z" level=info msg="received container exit event container_id:\"1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f\" id:\"1b622eb273f8465acbe4c2e02a2073d029d78782eac957bc76be65359212c95f\" pid:4642 exited_at:{seconds:1769200779 nanos:388245811}"
Jan 23 20:39:39.629844 sshd[4577]: Accepted publickey for core from 68.220.241.50 port 48156 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:39.632793 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:39.638322 systemd-logind[1547]: New session 28 of user core.
Jan 23 20:39:39.647438 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 20:39:40.018592 kubelet[2871]: I0123 20:39:40.018251 2871 setters.go:618] "Node became not ready" node="srv-zm8g6.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T20:39:40Z","lastTransitionTime":"2026-01-23T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 20:39:40.034767 sshd[4674]: Connection closed by 68.220.241.50 port 48156
Jan 23 20:39:40.035648 sshd-session[4577]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:40.042673 systemd[1]: sshd@25-10.244.93.250:22-68.220.241.50:48156.service: Deactivated successfully.
Jan 23 20:39:40.047157 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 20:39:40.048621 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
Jan 23 20:39:40.052866 systemd-logind[1547]: Removed session 28.
Jan 23 20:39:40.144611 systemd[1]: Started sshd@26-10.244.93.250:22-68.220.241.50:48162.service - OpenSSH per-connection server daemon (68.220.241.50:48162).
Jan 23 20:39:40.437931 containerd[1575]: time="2026-01-23T20:39:40.436204804Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 20:39:40.447068 containerd[1575]: time="2026-01-23T20:39:40.447009808Z" level=info msg="Container c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:39:40.456555 containerd[1575]: time="2026-01-23T20:39:40.456503288Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4\""
Jan 23 20:39:40.458374 containerd[1575]: time="2026-01-23T20:39:40.457688204Z" level=info msg="StartContainer for \"c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4\""
Jan 23 20:39:40.459201 containerd[1575]: time="2026-01-23T20:39:40.459176719Z" level=info msg="connecting to shim c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4" address="unix:///run/containerd/s/7baff29e815f7d50437bd76f05e200a072103cf3bfca72766a62d7b27b6c1846" protocol=ttrpc version=3
Jan 23 20:39:40.495598 systemd[1]: Started cri-containerd-c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4.scope - libcontainer container c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4.
Jan 23 20:39:40.552842 containerd[1575]: time="2026-01-23T20:39:40.552763595Z" level=info msg="StartContainer for \"c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4\" returns successfully"
Jan 23 20:39:40.563375 systemd[1]: cri-containerd-c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4.scope: Deactivated successfully.
Jan 23 20:39:40.564166 systemd[1]: cri-containerd-c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4.scope: Consumed 32ms CPU time, 7.4M memory peak, 2.2M read from disk.
Jan 23 20:39:40.565901 containerd[1575]: time="2026-01-23T20:39:40.565869919Z" level=info msg="received container exit event container_id:\"c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4\" id:\"c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4\" pid:4697 exited_at:{seconds:1769200780 nanos:564753817}"
Jan 23 20:39:40.589557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4cf4c5e55358c51aac9e85267b84430c00c54d7cb4a959b6991be9106811af4-rootfs.mount: Deactivated successfully.
Jan 23 20:39:40.734471 sshd[4681]: Accepted publickey for core from 68.220.241.50 port 48162 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s
Jan 23 20:39:40.737113 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 20:39:40.750371 systemd-logind[1547]: New session 29 of user core.
Jan 23 20:39:40.757481 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 23 20:39:41.445745 containerd[1575]: time="2026-01-23T20:39:41.445607869Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 20:39:41.458729 containerd[1575]: time="2026-01-23T20:39:41.456695582Z" level=info msg="Container 7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:39:41.467556 containerd[1575]: time="2026-01-23T20:39:41.467523891Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0\""
Jan 23 20:39:41.469457 containerd[1575]: time="2026-01-23T20:39:41.469429298Z" level=info msg="StartContainer for \"7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0\""
Jan 23 20:39:41.471211 containerd[1575]: time="2026-01-23T20:39:41.471186559Z" level=info msg="connecting to shim 7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0" address="unix:///run/containerd/s/7baff29e815f7d50437bd76f05e200a072103cf3bfca72766a62d7b27b6c1846" protocol=ttrpc version=3
Jan 23 20:39:41.498438 systemd[1]: Started cri-containerd-7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0.scope - libcontainer container 7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0.
Jan 23 20:39:41.586195 containerd[1575]: time="2026-01-23T20:39:41.586144695Z" level=info msg="StartContainer for \"7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0\" returns successfully"
Jan 23 20:39:41.591151 systemd[1]: cri-containerd-7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0.scope: Deactivated successfully.
Jan 23 20:39:41.591995 systemd[1]: cri-containerd-7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0.scope: Consumed 33ms CPU time, 5.8M memory peak, 1.1M read from disk.
Jan 23 20:39:41.593922 containerd[1575]: time="2026-01-23T20:39:41.593884083Z" level=info msg="received container exit event container_id:\"7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0\" id:\"7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0\" pid:4746 exited_at:{seconds:1769200781 nanos:593663897}"
Jan 23 20:39:41.629473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c4809e8b283a23b9c143b229abe5d8fc4ab93a272109f35c361e56a794c68b0-rootfs.mount: Deactivated successfully.
Jan 23 20:39:41.939357 kubelet[2871]: E0123 20:39:41.939196 2871 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 20:39:42.456501 containerd[1575]: time="2026-01-23T20:39:42.456445279Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 20:39:42.474287 containerd[1575]: time="2026-01-23T20:39:42.472823967Z" level=info msg="Container 9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:39:42.482794 containerd[1575]: time="2026-01-23T20:39:42.482730669Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c\""
Jan 23 20:39:42.484457 containerd[1575]: time="2026-01-23T20:39:42.484426959Z" level=info msg="StartContainer for \"9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c\""
Jan 23 20:39:42.485360 containerd[1575]: time="2026-01-23T20:39:42.485330054Z" level=info msg="connecting to shim 9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c" address="unix:///run/containerd/s/7baff29e815f7d50437bd76f05e200a072103cf3bfca72766a62d7b27b6c1846" protocol=ttrpc version=3
Jan 23 20:39:42.514517 systemd[1]: Started cri-containerd-9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c.scope - libcontainer container 9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c.
Jan 23 20:39:42.551265 systemd[1]: cri-containerd-9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c.scope: Deactivated successfully.
Jan 23 20:39:42.554591 containerd[1575]: time="2026-01-23T20:39:42.554539586Z" level=info msg="received container exit event container_id:\"9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c\" id:\"9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c\" pid:4786 exited_at:{seconds:1769200782 nanos:554110405}"
Jan 23 20:39:42.558707 containerd[1575]: time="2026-01-23T20:39:42.558571041Z" level=info msg="StartContainer for \"9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c\" returns successfully"
Jan 23 20:39:42.588386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9925dcf5dbd0fdcc5acaf2856a89780476760104764c5ed9e7928b7c4d7d116c-rootfs.mount: Deactivated successfully.
Jan 23 20:39:43.468075 containerd[1575]: time="2026-01-23T20:39:43.467565281Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 20:39:43.484305 containerd[1575]: time="2026-01-23T20:39:43.483065837Z" level=info msg="Container d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 20:39:43.496316 containerd[1575]: time="2026-01-23T20:39:43.496090259Z" level=info msg="CreateContainer within sandbox \"8c57b3f800da910a4bc0da93d2bab803cade2778fa3b42d9b5909f59feb82afd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f\""
Jan 23 20:39:43.497676 containerd[1575]: time="2026-01-23T20:39:43.497513776Z" level=info msg="StartContainer for \"d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f\""
Jan 23 20:39:43.499602 containerd[1575]: time="2026-01-23T20:39:43.499436118Z" level=info msg="connecting to shim d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f" address="unix:///run/containerd/s/7baff29e815f7d50437bd76f05e200a072103cf3bfca72766a62d7b27b6c1846" protocol=ttrpc version=3
Jan 23 20:39:43.529442 systemd[1]: Started cri-containerd-d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f.scope - libcontainer container d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f.
Jan 23 20:39:43.593889 containerd[1575]: time="2026-01-23T20:39:43.593839350Z" level=info msg="StartContainer for \"d86e63e14ec301cf875fe352b290f7f69249ce89006ce4dd5b8174555670268f\" returns successfully"
Jan 23 20:39:44.092415 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 23 20:39:44.501546 kubelet[2871]: I0123 20:39:44.500892 2871 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sw424" podStartSLOduration=6.500871021 podStartE2EDuration="6.500871021s" podCreationTimestamp="2026-01-23 20:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 20:39:44.500526087 +0000 UTC m=+137.886913047" watchObservedRunningTime="2026-01-23 20:39:44.500871021 +0000 UTC m=+137.887257955"
Jan 23 20:39:47.452254 systemd-networkd[1495]: lxc_health: Link UP
Jan 23 20:39:47.503139 systemd-networkd[1495]: lxc_health: Gained carrier
Jan 23 20:39:49.339658 systemd-networkd[1495]: lxc_health: Gained IPv6LL
Jan 23 20:39:54.436120 sshd[4728]: Connection closed by 68.220.241.50 port 48162
Jan 23 20:39:54.438072 sshd-session[4681]: pam_unix(sshd:session): session closed for user core
Jan 23 20:39:54.451946 systemd[1]: sshd@26-10.244.93.250:22-68.220.241.50:48162.service: Deactivated successfully.
Jan 23 20:39:54.454694 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 20:39:54.456059 systemd-logind[1547]: Session 29 logged out. Waiting for processes to exit.
Jan 23 20:39:54.461232 systemd-logind[1547]: Removed session 29.