Jan 28 02:08:21.472753 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:30:15 -00 2026
Jan 28 02:08:21.472783 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262
Jan 28 02:08:21.472798 kernel: BIOS-provided physical RAM map:
Jan 28 02:08:21.472808 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 02:08:21.472815 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 02:08:21.472823 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 02:08:21.472832 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 28 02:08:21.472840 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 28 02:08:21.472847 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 02:08:21.472856 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 02:08:21.472865 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 02:08:21.472878 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 02:08:21.472888 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 02:08:21.472896 kernel: NX (Execute Disable) protection: active
Jan 28 02:08:21.472905 kernel: APIC: Static calls initialized
Jan 28 02:08:21.472914 kernel: SMBIOS 2.8 present.
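The BIOS-e820 entries above tell the kernel which physical ranges it may actually use. A minimal post-processing sketch (the helper name `usable_bytes` is hypothetical, not part of any kernel tooling) that tallies the usable memory from such log lines:

```python
import re

# Matches e.g. "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable"
E820_RE = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all 'usable' e820 ranges; end addresses are inclusive."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]
print(usable_bytes(lines))  # 2633481216 bytes, roughly 2.45 GiB
```

Run against the two usable ranges reported here, the total is consistent with the ~2.5 GB of guest RAM the kernel later reports in its `Memory:` summary.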
Jan 28 02:08:21.472926 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 28 02:08:21.472935 kernel: DMI: Memory slots populated: 1/1
Jan 28 02:08:21.472945 kernel: Hypervisor detected: KVM
Jan 28 02:08:21.472956 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 02:08:21.472964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 02:08:21.472973 kernel: kvm-clock: using sched offset of 17783831483 cycles
Jan 28 02:08:21.472982 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 02:08:21.472991 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 02:08:21.473001 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 02:08:21.473012 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 02:08:21.473025 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 02:08:21.473034 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 02:08:21.473042 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 02:08:21.473051 kernel: Using GB pages for direct mapping
Jan 28 02:08:21.473061 kernel: ACPI: Early table checksum verification disabled
Jan 28 02:08:21.473071 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 28 02:08:21.473081 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473090 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473098 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473110 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 28 02:08:21.473119 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473128 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473137 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473146 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:08:21.473161 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 28 02:08:21.473174 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 28 02:08:21.473186 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 28 02:08:21.473195 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 28 02:08:21.473207 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 28 02:08:21.473217 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 28 02:08:21.473228 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 28 02:08:21.473240 kernel: No NUMA configuration found
Jan 28 02:08:21.473249 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 28 02:08:21.473264 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 28 02:08:21.473273 kernel: Zone ranges:
Jan 28 02:08:21.473282 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 02:08:21.473291 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 28 02:08:21.473300 kernel: Normal empty
Jan 28 02:08:21.473310 kernel: Device empty
Jan 28 02:08:21.473319 kernel: Movable zone start for each node
Jan 28 02:08:21.473329 kernel: Early memory node ranges
Jan 28 02:08:21.473338 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 02:08:21.473350 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 28 02:08:21.473359 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 28 02:08:21.473369 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 02:08:21.473380 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 02:08:21.473391 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 28 02:08:21.473402 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 02:08:21.473413 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 02:08:21.473425 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 02:08:21.473436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 02:08:21.473452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 02:08:21.473463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 02:08:21.473475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 02:08:21.473487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 02:08:21.473639 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 02:08:21.475830 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 02:08:21.475844 kernel: TSC deadline timer available
Jan 28 02:08:21.475854 kernel: CPU topo: Max. logical packages: 1
Jan 28 02:08:21.475864 kernel: CPU topo: Max. logical dies: 1
Jan 28 02:08:21.475881 kernel: CPU topo: Max. dies per package: 1
Jan 28 02:08:21.475890 kernel: CPU topo: Max. threads per core: 1
Jan 28 02:08:21.475900 kernel: CPU topo: Num. cores per package: 4
Jan 28 02:08:21.475909 kernel: CPU topo: Num. threads per package: 4
Jan 28 02:08:21.475917 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 02:08:21.475927 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 02:08:21.475938 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 02:08:21.475949 kernel: kvm-guest: setup PV sched yield
Jan 28 02:08:21.475960 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 02:08:21.475976 kernel: Booting paravirtualized kernel on KVM
Jan 28 02:08:21.475987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 02:08:21.475999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 02:08:21.476010 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 02:08:21.476021 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 02:08:21.476033 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 02:08:21.476044 kernel: kvm-guest: PV spinlocks enabled
Jan 28 02:08:21.476055 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 02:08:21.476065 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262
Jan 28 02:08:21.476079 kernel: random: crng init done
Jan 28 02:08:21.476088 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 02:08:21.476097 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 02:08:21.476109 kernel: Fallback order for Node 0: 0
Jan 28 02:08:21.476122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 28 02:08:21.476131 kernel: Policy zone: DMA32
Jan 28 02:08:21.476140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 02:08:21.476149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 02:08:21.476158 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 28 02:08:21.476173 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 02:08:21.476185 kernel: Dynamic Preempt: voluntary
Jan 28 02:08:21.476194 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 02:08:21.476204 kernel: rcu: RCU event tracing is enabled.
Jan 28 02:08:21.476214 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 02:08:21.476224 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 02:08:21.476233 kernel: Rude variant of Tasks RCU enabled.
Jan 28 02:08:21.476242 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 02:08:21.476253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 02:08:21.476266 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 02:08:21.476278 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 02:08:21.476289 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 02:08:21.476299 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 02:08:21.476311 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 02:08:21.476321 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
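The kernel command line echoed above is a space-separated list of bare flags and key=value tokens (here dracut has prepended `rootflags=rw mount.usrflags=ro` to the bootloader's line, so some keys repeat). A hedged sketch of splitting such a line, as one might when post-processing `/proc/cmdline`; `parse_cmdline` is an illustrative helper, and the last-duplicate-wins rule is an assumption of this sketch, not a kernel guarantee for every parameter:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict: bare flags map to True,
    key=value tokens keep everything after the first '=' (so root=LABEL=ROOT
    yields 'LABEL=ROOT'); later duplicates overwrite earlier ones."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "mount.usr=/dev/mapper/usr rootflags=rw consoleblank=0 "
           "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])     # LABEL=ROOT
print(params["console"])  # ttyS0,115200
```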
Jan 28 02:08:21.476346 kernel: Console: colour VGA+ 80x25
Jan 28 02:08:21.476356 kernel: printk: legacy console [ttyS0] enabled
Jan 28 02:08:21.476365 kernel: ACPI: Core revision 20240827
Jan 28 02:08:21.476375 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 02:08:21.476456 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 02:08:21.476474 kernel: x2apic enabled
Jan 28 02:08:21.476486 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 02:08:21.476626 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 02:08:21.476638 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 02:08:21.476725 kernel: kvm-guest: setup PV IPIs
Jan 28 02:08:21.476743 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 02:08:21.476753 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 02:08:21.476763 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 02:08:21.476772 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 02:08:21.476782 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 02:08:21.476794 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 02:08:21.476806 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 02:08:21.476816 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 02:08:21.476825 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 02:08:21.476839 kernel: Speculative Store Bypass: Vulnerable
Jan 28 02:08:21.476848 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 02:08:21.476859 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 02:08:21.476869 kernel: active return thunk: srso_alias_return_thunk
Jan 28 02:08:21.476879 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 02:08:21.476891 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 02:08:21.476901 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 02:08:21.476913 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 02:08:21.476927 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 02:08:21.476940 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 02:08:21.476951 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 02:08:21.476962 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 02:08:21.476975 kernel: Freeing SMP alternatives memory: 32K
Jan 28 02:08:21.476985 kernel: pid_max: default: 32768 minimum: 301
Jan 28 02:08:21.476997 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 02:08:21.477007 kernel: landlock: Up and running.
Jan 28 02:08:21.477016 kernel: SELinux: Initializing.
Jan 28 02:08:21.477029 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 02:08:21.477039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 02:08:21.477049 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 02:08:21.477059 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 02:08:21.477069 kernel: signal: max sigframe size: 1776
Jan 28 02:08:21.477080 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 02:08:21.477090 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 02:08:21.477100 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 02:08:21.477109 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 02:08:21.477121 kernel: smp: Bringing up secondary CPUs ...
Jan 28 02:08:21.477133 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 02:08:21.477144 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 02:08:21.477156 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 02:08:21.477168 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 02:08:21.477180 kernel: Memory: 2420712K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145100K reserved, 0K cma-reserved)
Jan 28 02:08:21.477192 kernel: devtmpfs: initialized
Jan 28 02:08:21.477203 kernel: x86/mm: Memory block size: 128MB
Jan 28 02:08:21.477215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 02:08:21.477232 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 02:08:21.477244 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 02:08:21.477254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 02:08:21.477264 kernel: audit: initializing netlink subsys (disabled)
Jan 28 02:08:21.477273 kernel: audit: type=2000 audit(1769566085.535:1): state=initialized audit_enabled=0 res=1
Jan 28 02:08:21.477283 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 02:08:21.477294 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 02:08:21.477306 kernel: cpuidle: using governor menu
Jan 28 02:08:21.477317 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 02:08:21.477331 kernel: dca service started, version 1.12.1
Jan 28 02:08:21.477341 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 28 02:08:21.477350 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
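The `Memory: 2420712K/2571752K available (...)` summary above reports available versus total managed memory; the difference is what the kernel image and early reservations consume. A small sketch for pulling those figures out of such a line (the `mem_summary` helper is hypothetical, for log analysis only):

```python
import re

# Matches the boot summary, e.g. "Memory: 2420712K/2571752K available (...)"
MEM_RE = re.compile(r"Memory: (\d+)K/(\d+)K available")

def mem_summary(line):
    """Return (available_kib, total_kib, consumed_kib) parsed from a
    'Memory:' boot line; consumed is simply total minus available."""
    m = MEM_RE.search(line)
    available, total = int(m.group(1)), int(m.group(2))
    return available, total, total - available

line = ("Memory: 2420712K/2571752K available (14336K kernel code, 2445K rwdata, "
        "26064K rodata, 46200K init, 2560K bss, 145100K reserved, 0K cma-reserved)")
print(mem_summary(line))  # (2420712, 2571752, 151040)
```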
Jan 28 02:08:21.477361 kernel: PCI: Using configuration type 1 for base access
Jan 28 02:08:21.477374 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 02:08:21.477384 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 02:08:21.477393 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 02:08:21.477404 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 02:08:21.477415 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 02:08:21.477429 kernel: ACPI: Added _OSI(Module Device)
Jan 28 02:08:21.477440 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 02:08:21.477452 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 02:08:21.477462 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 02:08:21.477474 kernel: ACPI: Interpreter enabled
Jan 28 02:08:21.477486 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 02:08:21.477635 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 02:08:21.478817 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 02:08:21.478834 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 02:08:21.478849 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 02:08:21.478859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 02:08:21.479104 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 02:08:21.479277 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 02:08:21.479813 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 02:08:21.479831 kernel: PCI host bridge to bus 0000:00
Jan 28 02:08:21.480008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 02:08:21.480165 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 02:08:21.480314 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 02:08:21.480467 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 02:08:21.484375 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 02:08:21.485136 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 28 02:08:21.485369 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 02:08:21.488064 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 02:08:21.488263 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 02:08:21.490263 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 28 02:08:21.490804 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 28 02:08:21.490976 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 28 02:08:21.491157 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 02:08:21.491327 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 26367 usecs
Jan 28 02:08:21.494069 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 02:08:21.494253 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 28 02:08:21.494644 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 28 02:08:21.494887 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 28 02:08:21.496846 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 02:08:21.497106 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 28 02:08:21.498210 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 28 02:08:21.498406 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 28 02:08:21.501944 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 02:08:21.502132 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 28 02:08:21.502301 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 28 02:08:21.504879 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 28 02:08:21.505060 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 28 02:08:21.505251 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 02:08:21.505740 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 02:08:21.505924 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 12695 usecs
Jan 28 02:08:21.506098 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 02:08:21.506264 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 28 02:08:21.506434 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 28 02:08:21.508013 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 02:08:21.508206 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 28 02:08:21.508228 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 02:08:21.508240 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 02:08:21.508249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 02:08:21.508260 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 02:08:21.508270 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 02:08:21.508280 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 02:08:21.508292 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 02:08:21.508302 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 02:08:21.508321 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 02:08:21.508331 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 02:08:21.508343 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 02:08:21.508355 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 02:08:21.508366 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 02:08:21.508378 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 02:08:21.508458 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 02:08:21.508470 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 02:08:21.508480 kernel: iommu: Default domain type: Translated
Jan 28 02:08:21.508631 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 02:08:21.508642 kernel: PCI: Using ACPI for IRQ routing
Jan 28 02:08:21.510824 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 02:08:21.510838 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 02:08:21.510849 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 28 02:08:21.511044 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 02:08:21.511211 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 02:08:21.511384 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 02:08:21.511479 kernel: vgaarb: loaded
Jan 28 02:08:21.511628 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 02:08:21.511642 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 02:08:21.511787 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 02:08:21.511799 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 02:08:21.511812 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 02:08:21.511821 kernel: pnp: PnP ACPI init
Jan 28 02:08:21.512006 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 02:08:21.512025 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 02:08:21.512037 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 02:08:21.512052 kernel: NET: Registered PF_INET protocol family
Jan 28 02:08:21.512062 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 02:08:21.512072 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 02:08:21.512082 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 02:08:21.512092 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 02:08:21.512101 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 02:08:21.512111 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 02:08:21.512121 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 02:08:21.512134 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 02:08:21.512143 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 02:08:21.512153 kernel: NET: Registered PF_XDP protocol family
Jan 28 02:08:21.512341 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 02:08:21.513981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 02:08:21.514283 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 02:08:21.524330 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 02:08:21.527095 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 02:08:21.527275 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 28 02:08:21.527304 kernel: PCI: CLS 0 bytes, default 64
Jan 28 02:08:21.527317 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 02:08:21.527328 kernel: Initialise system trusted keyrings
Jan 28 02:08:21.527338 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 02:08:21.527348 kernel: Key type asymmetric registered
Jan 28 02:08:21.527358 kernel: Asymmetric key parser 'x509' registered
Jan 28 02:08:21.527368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 02:08:21.527380 kernel: io scheduler mq-deadline registered
Jan 28 02:08:21.527463 kernel: io scheduler kyber registered
Jan 28 02:08:21.527484 kernel: io scheduler bfq registered
Jan 28 02:08:21.527627 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 02:08:21.527640 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 02:08:21.527732 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 02:08:21.527745 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 02:08:21.527758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 02:08:21.527770 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 02:08:21.527782 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 02:08:21.527794 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 02:08:21.527813 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 02:08:21.528022 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 02:08:21.528041 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 02:08:21.528215 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 02:08:21.529851 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T02:08:18 UTC (1769566098)
Jan 28 02:08:21.530016 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 02:08:21.530032 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 02:08:21.530051 kernel: NET: Registered PF_INET6 protocol family
Jan 28 02:08:21.530062 kernel: Segment Routing with IPv6
Jan 28 02:08:21.530074 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 02:08:21.530085 kernel: NET: Registered PF_PACKET protocol family
Jan 28 02:08:21.530096 kernel: Key type dns_resolver registered
Jan 28 02:08:21.530109 kernel: IPI shorthand broadcast: enabled
Jan 28 02:08:21.530120 kernel: sched_clock: Marking stable (9032077691, 2752320828)->(13376751767, -1592353248)
Jan 28 02:08:21.530131 kernel: registered taskstats version 1
Jan 28 02:08:21.530144 kernel: Loading compiled-in X.509 certificates
Jan 28 02:08:21.530155 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 31c1e06975b690596c927b070a4cb9e218a3417b'
Jan 28 02:08:21.530171 kernel: Demotion targets for Node 0: null
Jan 28 02:08:21.530181 kernel: Key type .fscrypt registered
Jan 28 02:08:21.530191 kernel: Key type fscrypt-provisioning registered
Jan 28 02:08:21.530201 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 02:08:21.530211 kernel: ima: Allocated hash algorithm: sha1
Jan 28 02:08:21.530221 kernel: ima: No architecture policies found
Jan 28 02:08:21.530230 kernel: clk: Disabling unused clocks
Jan 28 02:08:21.530242 kernel: Warning: unable to open an initial console.
Jan 28 02:08:21.530255 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 28 02:08:21.530266 kernel: Write protecting the kernel read-only data: 40960k
Jan 28 02:08:21.530276 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 28 02:08:21.530287 kernel: Run /init as init process
Jan 28 02:08:21.530298 kernel: with arguments:
Jan 28 02:08:21.530310 kernel: /init
Jan 28 02:08:21.530322 kernel: with environment:
Jan 28 02:08:21.530334 kernel: HOME=/
Jan 28 02:08:21.530346 kernel: TERM=linux
Jan 28 02:08:21.530361 systemd[1]: Successfully made /usr/ read-only.
Jan 28 02:08:21.530383 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 02:08:21.530397 systemd[1]: Detected virtualization kvm.
Jan 28 02:08:21.530410 systemd[1]: Detected architecture x86-64.
Jan 28 02:08:21.530421 systemd[1]: Running in initrd.
Jan 28 02:08:21.530431 systemd[1]: No hostname configured, using default hostname.
Jan 28 02:08:21.530442 systemd[1]: Hostname set to .
Jan 28 02:08:21.530458 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 02:08:21.530488 systemd[1]: Queued start job for default target initrd.target.
Jan 28 02:08:21.533910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 02:08:21.533927 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 02:08:21.533942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 02:08:21.533953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 02:08:21.533972 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 02:08:21.533986 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 02:08:21.534001 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 02:08:21.534012 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 02:08:21.534023 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 02:08:21.534034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:08:21.534044 systemd[1]: Reached target paths.target - Path Units.
Jan 28 02:08:21.534059 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 02:08:21.534069 systemd[1]: Reached target swap.target - Swaps.
Jan 28 02:08:21.534081 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 02:08:21.534092 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 02:08:21.534102 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 02:08:21.534114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 02:08:21.534124 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 02:08:21.534137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 02:08:21.534150 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 02:08:21.534168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 02:08:21.534181 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 02:08:21.534192 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 02:08:21.534203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 02:08:21.534214 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 02:08:21.534227 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 02:08:21.534240 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 02:08:21.534251 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 02:08:21.534266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 02:08:21.534278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 02:08:21.534346 systemd-journald[203]: Collecting audit messages is disabled. Jan 28 02:08:21.534385 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 02:08:21.534472 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 28 02:08:21.534489 systemd-journald[203]: Journal started Jan 28 02:08:21.534725 systemd-journald[203]: Runtime Journal (/run/log/journal/14972f5768ec4cb9963911b59a48d06f) is 6M, max 48.3M, 42.2M free. Jan 28 02:08:21.539334 systemd-modules-load[205]: Inserted module 'overlay' Jan 28 02:08:21.581625 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 02:08:21.601205 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 02:08:21.635952 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 02:08:21.714973 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 02:08:21.784489 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 02:08:21.794801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 02:08:23.022947 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 02:08:23.023008 kernel: Bridge firewalling registered Jan 28 02:08:21.853300 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 02:08:22.055311 systemd-modules-load[205]: Inserted module 'br_netfilter' Jan 28 02:08:22.965794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 02:08:23.089305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:08:23.163376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 02:08:23.174250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 02:08:23.277415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 28 02:08:23.304450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 02:08:23.399095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:08:23.427193 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 02:08:23.514300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 02:08:23.541451 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 02:08:23.685961 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262 Jan 28 02:08:23.709004 systemd-resolved[243]: Positive Trust Anchors: Jan 28 02:08:23.709018 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 02:08:23.709063 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 02:08:23.725308 systemd-resolved[243]: Defaulting to hostname 'linux'. Jan 28 02:08:23.728090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 28 02:08:23.776878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 02:08:24.372990 kernel: SCSI subsystem initialized Jan 28 02:08:24.399642 kernel: Loading iSCSI transport class v2.0-870. Jan 28 02:08:24.462616 kernel: iscsi: registered transport (tcp) Jan 28 02:08:24.536363 kernel: iscsi: registered transport (qla4xxx) Jan 28 02:08:24.536445 kernel: QLogic iSCSI HBA Driver Jan 28 02:08:24.632182 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 02:08:24.692377 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 02:08:24.712393 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 02:08:24.979295 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 02:08:24.986047 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 02:08:25.175181 kernel: raid6: avx2x4 gen() 20733 MB/s Jan 28 02:08:25.198012 kernel: raid6: avx2x2 gen() 20352 MB/s Jan 28 02:08:25.235879 kernel: raid6: avx2x1 gen() 9569 MB/s Jan 28 02:08:25.236401 kernel: raid6: using algorithm avx2x4 gen() 20733 MB/s Jan 28 02:08:25.279121 kernel: raid6: .... xor() 3462 MB/s, rmw enabled Jan 28 02:08:25.279207 kernel: raid6: using avx2x2 recovery algorithm Jan 28 02:08:25.316902 kernel: xor: automatically using best checksumming function avx Jan 28 02:08:26.493650 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 02:08:26.563106 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 02:08:26.598806 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 02:08:26.764666 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jan 28 02:08:26.801898 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 28 02:08:26.863438 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 02:08:27.036055 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jan 28 02:08:27.301239 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 02:08:27.356448 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 02:08:27.752843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 02:08:27.900037 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 02:08:28.329927 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 02:08:28.330104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:08:28.378656 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 02:08:28.480243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 02:08:28.576117 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 02:08:28.576415 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 28 02:08:28.526691 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 02:08:28.679338 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 02:08:28.679378 kernel: GPT:9289727 != 19775487 Jan 28 02:08:28.679393 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 02:08:28.679408 kernel: GPT:9289727 != 19775487 Jan 28 02:08:28.685975 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 02:08:28.700442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 02:08:28.772881 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 02:08:29.062251 kernel: libata version 3.00 loaded. Jan 28 02:08:29.241176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 28 02:08:30.749957 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 28 02:08:30.750004 kernel: AES CTR mode by8 optimization enabled Jan 28 02:08:30.750023 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 02:08:30.750298 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 02:08:30.750327 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 02:08:30.750707 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 02:08:30.751015 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 02:08:30.751229 kernel: scsi host0: ahci Jan 28 02:08:30.751441 kernel: scsi host1: ahci Jan 28 02:08:30.751907 kernel: scsi host2: ahci Jan 28 02:08:30.752123 kernel: scsi host3: ahci Jan 28 02:08:30.752344 kernel: scsi host4: ahci Jan 28 02:08:30.763679 kernel: scsi host5: ahci Jan 28 02:08:30.764084 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 28 02:08:30.764105 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 28 02:08:30.764119 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 28 02:08:30.764132 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 28 02:08:30.764146 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 28 02:08:30.764170 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 28 02:08:30.764184 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 02:08:30.764197 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 02:08:30.764211 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 02:08:30.764223 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 02:08:30.764236 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 02:08:30.764250 kernel: ata6: SATA link down (SStatus 0 SControl 
300) Jan 28 02:08:30.764263 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 02:08:30.764276 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 02:08:30.764293 kernel: ata3.00: applying bridge limits Jan 28 02:08:30.764307 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 02:08:30.764320 kernel: ata3.00: configured for UDMA/100 Jan 28 02:08:30.764333 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 02:08:30.764847 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 02:08:30.765055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 02:08:30.765073 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 02:08:30.794431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:08:30.890259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 02:08:30.954702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 02:08:30.981075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 28 02:08:31.011935 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 02:08:31.081686 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 02:08:31.254204 disk-uuid[627]: Primary Header is updated. Jan 28 02:08:31.254204 disk-uuid[627]: Secondary Entries is updated. Jan 28 02:08:31.254204 disk-uuid[627]: Secondary Header is updated. Jan 28 02:08:31.311174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 02:08:31.704384 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 02:08:31.755055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 28 02:08:31.804212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 02:08:31.827954 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 02:08:31.873150 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 02:08:32.031044 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 02:08:32.361278 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 02:08:32.386000 disk-uuid[628]: The operation has completed successfully. Jan 28 02:08:32.656055 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 02:08:32.656406 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 02:08:32.859223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 02:08:32.979725 sh[652]: Success Jan 28 02:08:33.165711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 02:08:33.165877 kernel: device-mapper: uevent: version 1.0.3 Jan 28 02:08:33.190829 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 02:08:33.360718 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 28 02:08:33.647085 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 02:08:33.734091 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 02:08:33.794154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 28 02:08:33.861010 kernel: BTRFS: device fsid 4389fb68-1fd1-4240-9a3a-21ed56363b72 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (664) Jan 28 02:08:33.897323 kernel: BTRFS info (device dm-0): first mount of filesystem 4389fb68-1fd1-4240-9a3a-21ed56363b72 Jan 28 02:08:33.897479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:08:34.036658 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 02:08:34.036747 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 02:08:34.052361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 02:08:34.072904 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 02:08:34.134354 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 02:08:34.189703 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 02:08:34.216449 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 02:08:34.481870 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (697) Jan 28 02:08:34.513485 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 02:08:34.513696 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:08:34.608716 kernel: BTRFS info (device vda6): turning on async discard Jan 28 02:08:34.608885 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 02:08:34.662741 kernel: BTRFS info (device vda6): last unmount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 02:08:34.701282 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 02:08:34.732230 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 28 02:08:35.169664 ignition[754]: Ignition 2.22.0 Jan 28 02:08:35.169738 ignition[754]: Stage: fetch-offline Jan 28 02:08:35.169873 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:08:35.169887 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 02:08:35.170022 ignition[754]: parsed url from cmdline: "" Jan 28 02:08:35.170030 ignition[754]: no config URL provided Jan 28 02:08:35.170040 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 02:08:35.170052 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jan 28 02:08:35.172935 ignition[754]: op(1): [started] loading QEMU firmware config module Jan 28 02:08:35.172943 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 02:08:35.290007 ignition[754]: op(1): [finished] loading QEMU firmware config module Jan 28 02:08:35.683143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 02:08:35.732653 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 02:08:36.075895 systemd-networkd[841]: lo: Link UP Jan 28 02:08:36.075985 systemd-networkd[841]: lo: Gained carrier Jan 28 02:08:36.088959 systemd-networkd[841]: Enumeration completed Jan 28 02:08:36.095146 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 02:08:36.101732 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:08:36.101739 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 02:08:36.117037 systemd-networkd[841]: eth0: Link UP Jan 28 02:08:36.117321 systemd-networkd[841]: eth0: Gained carrier Jan 28 02:08:36.117342 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 28 02:08:36.147664 systemd[1]: Reached target network.target - Network. Jan 28 02:08:36.326428 systemd-networkd[841]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 02:08:37.603393 ignition[754]: parsing config with SHA512: aee19d1d04a526bf0b38e2db009c233ed09a02d4eb16bba4b63afa468a3c2a60bedc26a30c518f1cc93c177cd1e4141fa81571c1c1b4fbc91dd101352219c279 Jan 28 02:08:37.643097 unknown[754]: fetched base config from "system" Jan 28 02:08:37.643116 unknown[754]: fetched user config from "qemu" Jan 28 02:08:37.657133 ignition[754]: fetch-offline: fetch-offline passed Jan 28 02:08:37.657260 ignition[754]: Ignition finished successfully Jan 28 02:08:37.670382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 02:08:37.688926 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 02:08:37.699907 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 02:08:37.878435 ignition[846]: Ignition 2.22.0 Jan 28 02:08:37.882198 ignition[846]: Stage: kargs Jan 28 02:08:37.882454 ignition[846]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:08:37.882472 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 02:08:37.929755 ignition[846]: kargs: kargs passed Jan 28 02:08:37.930031 ignition[846]: Ignition finished successfully Jan 28 02:08:37.962148 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 02:08:37.985974 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 28 02:08:38.079472 systemd-networkd[841]: eth0: Gained IPv6LL Jan 28 02:08:38.154336 ignition[854]: Ignition 2.22.0 Jan 28 02:08:38.155479 ignition[854]: Stage: disks Jan 28 02:08:38.156922 ignition[854]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:08:38.156942 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 02:08:38.158377 ignition[854]: disks: disks passed Jan 28 02:08:38.159154 ignition[854]: Ignition finished successfully Jan 28 02:08:38.211428 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 02:08:38.239779 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 02:08:38.271356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 02:08:38.299733 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 02:08:38.311035 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 02:08:38.330371 systemd[1]: Reached target basic.target - Basic System. Jan 28 02:08:38.378315 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 02:08:38.539432 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 28 02:08:38.591385 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 02:08:38.636243 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 02:08:40.002015 kernel: EXT4-fs (vda9): mounted filesystem 0c920277-6cf2-4276-8e4c-1a9645be49e7 r/w with ordered data mode. Quota mode: none. Jan 28 02:08:40.005432 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 02:08:40.032247 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 02:08:40.104242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 02:08:40.160136 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 28 02:08:40.208758 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 02:08:40.211368 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 02:08:40.211413 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 02:08:40.452656 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873) Jan 28 02:08:40.452704 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 02:08:40.452719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:08:40.293032 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 02:08:40.377340 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 02:08:40.515930 kernel: BTRFS info (device vda6): turning on async discard Jan 28 02:08:40.515978 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 02:08:40.518217 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 02:08:40.700638 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 02:08:40.742773 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Jan 28 02:08:40.773718 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 02:08:40.861196 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 02:08:41.691799 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 02:08:41.706314 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 02:08:41.783684 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 02:08:41.826172 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 28 02:08:41.851022 kernel: BTRFS info (device vda6): last unmount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 02:08:41.894459 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 02:08:41.947380 ignition[987]: INFO : Ignition 2.22.0 Jan 28 02:08:41.947380 ignition[987]: INFO : Stage: mount Jan 28 02:08:41.947380 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:08:41.947380 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 02:08:41.994942 ignition[987]: INFO : mount: mount passed Jan 28 02:08:41.994942 ignition[987]: INFO : Ignition finished successfully Jan 28 02:08:42.012268 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 02:08:42.039135 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 02:08:42.117154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 02:08:42.227672 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1000) Jan 28 02:08:42.262102 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 02:08:42.262259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:08:42.312753 kernel: BTRFS info (device vda6): turning on async discard Jan 28 02:08:42.312952 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 02:08:42.320191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 02:08:42.499087 ignition[1017]: INFO : Ignition 2.22.0 Jan 28 02:08:42.508790 ignition[1017]: INFO : Stage: files Jan 28 02:08:42.508790 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:08:42.508790 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 02:08:42.549189 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping Jan 28 02:08:42.562435 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 02:08:42.581247 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 02:08:42.603097 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 02:08:42.603097 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 02:08:42.636799 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 02:08:42.636799 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 02:08:42.636799 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 02:08:42.624769 unknown[1017]: wrote ssh authorized keys file for user: core Jan 28 02:08:42.754216 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 02:08:42.852317 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 02:08:42.872250 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 02:08:42.872250 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 28 02:08:43.008741 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 02:08:43.174357 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 02:08:43.194794 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 28 02:08:43.529155 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 02:08:44.015333 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 02:08:44.015333 ignition[1017]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 28 02:08:44.055740 ignition[1017]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 02:08:44.092828 ignition[1017]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 02:08:44.092828 ignition[1017]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 28 02:08:44.092828 ignition[1017]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 28 02:08:44.142675 ignition[1017]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 02:08:44.142675 ignition[1017]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 02:08:44.142675 ignition[1017]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 28 02:08:44.142675 ignition[1017]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 02:08:44.265139 ignition[1017]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 02:08:44.298793 ignition[1017]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 02:08:44.316704 ignition[1017]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 02:08:44.316704 ignition[1017]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 28 02:08:44.316704 ignition[1017]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 02:08:44.316704 ignition[1017]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 02:08:44.316704 ignition[1017]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 02:08:44.316704 ignition[1017]: INFO : files: files passed Jan 28 02:08:44.316704 ignition[1017]: INFO : Ignition finished successfully Jan 28 02:08:44.348051 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 02:08:44.366723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 02:08:44.466445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 02:08:44.481282 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 02:08:44.481399 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 28 02:08:44.564027 initrd-setup-root-after-ignition[1047]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 28 02:08:44.585033 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 02:08:44.605208 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 02:08:44.605208 initrd-setup-root-after-ignition[1049]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 02:08:44.644232 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 02:08:44.658719 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 02:08:44.696397 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 02:08:44.825087 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 02:08:44.825466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 02:08:44.834720 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 02:08:44.856783 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 02:08:44.876258 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 02:08:44.879239 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 02:08:44.988772 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 02:08:45.018405 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 02:08:45.131732 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 02:08:45.153726 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 02:08:45.196370 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 02:08:45.209283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 02:08:45.210287 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 02:08:45.241370 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 02:08:45.282198 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 02:08:45.294748 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 02:08:45.313130 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 02:08:45.336429 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 02:08:45.345313 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 02:08:45.419363 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 02:08:45.433726 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 02:08:45.461239 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 02:08:45.576439 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 02:08:45.594303 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 02:08:45.637775 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 02:08:45.638064 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 02:08:45.676062 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:08:45.693202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 02:08:45.730473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 02:08:45.787952 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 02:08:45.810212 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 02:08:45.810407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 02:08:45.909683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 02:08:45.910120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 02:08:45.944834 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 02:08:45.968821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 02:08:45.996010 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 02:08:46.084310 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 02:08:46.128350 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 02:08:46.146995 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 02:08:46.147144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 02:08:46.171713 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 02:08:46.171970 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 02:08:46.172186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 02:08:46.172347 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 02:08:46.172709 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 02:08:46.172841 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 02:08:46.507286 ignition[1073]: INFO : Ignition 2.22.0
Jan 28 02:08:46.507286 ignition[1073]: INFO : Stage: umount
Jan 28 02:08:46.507286 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 02:08:46.507286 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 02:08:46.507286 ignition[1073]: INFO : umount: umount passed
Jan 28 02:08:46.507286 ignition[1073]: INFO : Ignition finished successfully
Jan 28 02:08:46.174747 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 02:08:46.197952 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 02:08:46.198203 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 02:08:46.221391 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 02:08:46.337475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 02:08:46.338104 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 02:08:46.372702 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 02:08:46.372979 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 02:08:46.493310 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 02:08:46.498306 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 02:08:46.498777 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 02:08:46.525177 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 02:08:46.525429 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 02:08:46.596823 systemd[1]: Stopped target network.target - Network.
Jan 28 02:08:46.601277 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 02:08:46.601394 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 02:08:46.704331 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 02:08:46.704441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 02:08:46.704987 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 02:08:46.705062 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 02:08:46.810120 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 02:08:46.810223 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 02:08:46.836974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 02:08:46.837098 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 02:08:46.926226 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 02:08:46.938159 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 02:08:46.961396 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 02:08:46.961721 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 02:08:47.026083 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 02:08:47.026349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 02:08:47.137242 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 28 02:08:47.141267 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 02:08:47.141446 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 02:08:47.174975 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 28 02:08:47.177058 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 28 02:08:47.189005 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 02:08:47.189098 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 02:08:47.303420 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 02:08:47.391207 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 02:08:47.391343 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 02:08:47.392088 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 02:08:47.392163 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 02:08:47.468002 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 02:08:47.468187 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 02:08:47.502763 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 02:08:47.502984 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 02:08:47.661964 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 02:08:47.814225 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 28 02:08:47.814345 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 28 02:08:47.842127 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 02:08:47.843277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 02:08:47.863468 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 02:08:47.863761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 02:08:47.939707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 02:08:47.939790 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 02:08:47.958009 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 02:08:47.958232 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 02:08:48.059158 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 02:08:48.059273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 02:08:48.096040 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 02:08:48.096153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 02:08:48.128978 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 02:08:48.184350 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 28 02:08:48.184482 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 02:08:48.210742 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 02:08:48.210840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 02:08:48.238818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 02:08:48.239037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:08:48.335033 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 28 02:08:48.335134 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 28 02:08:48.335208 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 28 02:08:48.337105 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 02:08:48.337366 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 02:08:48.605412 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 02:08:48.607185 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 02:08:48.670757 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 02:08:48.731420 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 02:08:48.820322 systemd[1]: Switching root.
Jan 28 02:08:48.931189 systemd-journald[203]: Journal stopped
Jan 28 02:08:54.993117 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 28 02:08:54.993207 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 02:08:54.993236 kernel: SELinux: policy capability open_perms=1
Jan 28 02:08:54.993261 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 02:08:54.993276 kernel: SELinux: policy capability always_check_network=0
Jan 28 02:08:54.993291 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 02:08:54.993306 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 02:08:54.993323 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 02:08:54.993345 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 02:08:54.993362 kernel: SELinux: policy capability userspace_initial_context=0
Jan 28 02:08:54.993376 kernel: audit: type=1403 audit(1769566129.591:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 28 02:08:54.993392 systemd[1]: Successfully loaded SELinux policy in 247.057ms.
Jan 28 02:08:54.993431 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 49.117ms.
Jan 28 02:08:54.993452 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 02:08:54.993473 systemd[1]: Detected virtualization kvm.
Jan 28 02:08:54.993489 systemd[1]: Detected architecture x86-64.
Jan 28 02:08:54.993846 systemd[1]: Detected first boot.
Jan 28 02:08:54.993871 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 02:08:54.993889 zram_generator::config[1118]: No configuration found.
Jan 28 02:08:54.993905 kernel: Guest personality initialized and is inactive
Jan 28 02:08:54.993923 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 28 02:08:54.994040 kernel: Initialized host personality
Jan 28 02:08:54.994057 kernel: NET: Registered PF_VSOCK protocol family
Jan 28 02:08:54.994072 systemd[1]: Populated /etc with preset unit settings.
Jan 28 02:08:54.994088 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 28 02:08:54.994105 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 28 02:08:54.994127 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 28 02:08:54.994145 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 28 02:08:54.994164 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 02:08:54.994188 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 02:08:54.994204 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 02:08:54.994219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 02:08:54.994237 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 02:08:54.994256 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 02:08:54.994278 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 02:08:54.994293 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 02:08:54.994309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 02:08:54.994325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 02:08:54.994344 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 02:08:54.994365 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 02:08:54.994381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 02:08:54.994397 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 02:08:54.994416 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 28 02:08:54.994441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 02:08:54.994461 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:08:54.994476 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 28 02:08:54.994814 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 28 02:08:54.994840 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 28 02:08:54.994860 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 02:08:54.994876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 02:08:54.994891 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 02:08:54.994911 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 02:08:54.995026 systemd[1]: Reached target swap.target - Swaps.
Jan 28 02:08:54.995049 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 02:08:54.995068 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 02:08:54.995086 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 28 02:08:54.995101 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 02:08:54.995117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 02:08:54.995131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 02:08:54.995150 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 02:08:54.995178 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 02:08:54.995194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 02:08:54.995209 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 02:08:54.995224 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 02:08:54.995242 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 02:08:54.995260 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 02:08:54.995278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 02:08:54.995295 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 02:08:54.995315 systemd[1]: Reached target machines.target - Containers.
Jan 28 02:08:54.995330 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 02:08:54.995349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 02:08:54.995366 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 02:08:54.995383 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 02:08:54.995399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 02:08:54.995414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 02:08:54.995430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 02:08:54.995448 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 02:08:54.995471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 02:08:54.995487 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 02:08:54.995886 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 28 02:08:54.995907 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 28 02:08:54.995924 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 28 02:08:54.996047 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 28 02:08:54.996065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 02:08:54.996085 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 02:08:54.996109 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 02:08:54.996125 kernel: ACPI: bus type drm_connector registered
Jan 28 02:08:54.996140 kernel: fuse: init (API version 7.41)
Jan 28 02:08:54.996155 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 02:08:54.996173 kernel: loop: module loaded
Jan 28 02:08:54.996190 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 02:08:54.996206 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 28 02:08:54.996281 systemd-journald[1203]: Collecting audit messages is disabled.
Jan 28 02:08:54.996315 systemd-journald[1203]: Journal started
Jan 28 02:08:54.996349 systemd-journald[1203]: Runtime Journal (/run/log/journal/14972f5768ec4cb9963911b59a48d06f) is 6M, max 48.3M, 42.2M free.
Jan 28 02:08:52.369214 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 02:08:52.411484 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 28 02:08:52.415457 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 28 02:08:52.416348 systemd[1]: systemd-journald.service: Consumed 2.760s CPU time.
Jan 28 02:08:55.048691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 02:08:55.078724 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 28 02:08:55.078817 systemd[1]: Stopped verity-setup.service.
Jan 28 02:08:55.137923 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 02:08:55.161879 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 02:08:55.184030 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 02:08:55.197473 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 02:08:55.220031 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 02:08:55.242237 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 02:08:55.256117 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 02:08:55.272805 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 02:08:55.295775 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 02:08:55.324319 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 02:08:55.351781 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 02:08:55.353262 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 02:08:55.371362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 02:08:55.374228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 02:08:55.390747 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 02:08:55.391241 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 02:08:55.404679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 02:08:55.405245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 02:08:55.421355 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 02:08:55.422201 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 02:08:55.435242 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 02:08:55.435803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 02:08:55.451276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 02:08:55.471805 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 02:08:55.498445 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 02:08:55.516488 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 28 02:08:55.539364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 02:08:55.601434 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 02:08:55.633356 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 02:08:55.659239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 02:08:55.681044 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 02:08:55.681106 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 02:08:55.685719 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 28 02:08:55.736426 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 28 02:08:55.750415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 02:08:55.756445 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 02:08:55.777071 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 02:08:55.799099 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 02:08:55.802769 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 02:08:55.820359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 02:08:55.823303 systemd-journald[1203]: Time spent on flushing to /var/log/journal/14972f5768ec4cb9963911b59a48d06f is 47.368ms for 978 entries.
Jan 28 02:08:55.823303 systemd-journald[1203]: System Journal (/var/log/journal/14972f5768ec4cb9963911b59a48d06f) is 8M, max 195.6M, 187.6M free.
Jan 28 02:08:55.904279 systemd-journald[1203]: Received client request to flush runtime journal.
Jan 28 02:08:55.823932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 02:08:55.867442 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 28 02:08:55.904059 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 28 02:08:55.937385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 28 02:08:55.955748 kernel: loop0: detected capacity change from 0 to 128560
Jan 28 02:08:55.969690 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 28 02:08:56.017707 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 28 02:08:56.039305 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 28 02:08:56.060905 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 02:08:56.085377 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 28 02:08:56.088416 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 28 02:08:56.119417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 28 02:08:56.143859 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 28 02:08:56.161334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 02:08:56.167692 kernel: loop1: detected capacity change from 0 to 219144
Jan 28 02:08:56.296916 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 28 02:08:56.306176 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 28 02:08:56.393783 kernel: loop2: detected capacity change from 0 to 110984
Jan 28 02:08:56.399163 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Jan 28 02:08:56.399185 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Jan 28 02:08:56.420768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 02:08:56.569144 kernel: loop3: detected capacity change from 0 to 128560
Jan 28 02:08:56.686637 kernel: loop4: detected capacity change from 0 to 219144
Jan 28 02:08:56.788283 kernel: loop5: detected capacity change from 0 to 110984
Jan 28 02:08:56.864366 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 28 02:08:56.866110 (sd-merge)[1261]: Merged extensions into '/usr'.
Jan 28 02:08:56.883400 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 28 02:08:56.883798 systemd[1]: Reloading...
Jan 28 02:08:57.142375 zram_generator::config[1290]: No configuration found.
Jan 28 02:08:57.711142 systemd[1]: Reloading finished in 826 ms.
Jan 28 02:08:57.804458 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 28 02:08:57.837323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 28 02:08:57.896740 systemd[1]: Starting ensure-sysext.service...
Jan 28 02:08:57.941845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 02:08:57.967834 ldconfig[1233]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 28 02:08:57.994901 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 02:08:58.089841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 28 02:08:58.118487 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 28 02:08:58.123779 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)...
Jan 28 02:08:58.123806 systemd[1]: Reloading...
Jan 28 02:08:58.126064 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 28 02:08:58.126791 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 28 02:08:58.127448 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 28 02:08:58.137219 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 28 02:08:58.137935 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Jan 28 02:08:58.139365 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Jan 28 02:08:58.149861 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 02:08:58.149883 systemd-tmpfiles[1325]: Skipping /boot Jan 28 02:08:58.173484 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 02:08:58.173747 systemd-tmpfiles[1325]: Skipping /boot Jan 28 02:08:58.198287 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 28 02:08:58.291761 zram_generator::config[1351]: No configuration found. Jan 28 02:08:58.906745 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 02:08:58.942839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 28 02:08:59.006935 kernel: ACPI: button: Power Button [PWRF] Jan 28 02:08:59.072644 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 02:08:59.094686 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 02:08:59.151810 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 02:08:59.152309 systemd[1]: Reloading finished in 1022 ms. Jan 28 02:08:59.167417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 02:08:59.217171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 02:08:59.305718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 02:08:59.338770 systemd[1]: Finished ensure-sysext.service. Jan 28 02:08:59.365081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:08:59.369469 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 02:08:59.469410 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 28 02:08:59.488684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:08:59.588874 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 02:08:59.608244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 02:08:59.631802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 02:08:59.660128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 02:08:59.676144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:08:59.689078 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 02:08:59.707318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 02:08:59.718695 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 02:08:59.755946 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 02:08:59.797879 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 02:08:59.805844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 02:08:59.837365 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 02:08:59.876233 augenrules[1477]: No rules Jan 28 02:08:59.877162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 02:08:59.893311 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 28 02:08:59.897741 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 02:08:59.904076 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 02:08:59.925890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 02:08:59.926682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 02:08:59.948842 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 02:08:59.950220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 02:08:59.971738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 02:08:59.972913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 02:08:59.973362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 02:08:59.986143 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 02:08:59.986446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 02:08:59.988198 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 02:09:00.001281 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 02:09:00.074827 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 02:09:00.081186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 02:09:00.081356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 02:09:00.084089 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 02:09:00.142279 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 28 02:09:00.168208 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 02:09:00.176187 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 02:09:00.636778 kernel: kvm_amd: TSC scaling supported Jan 28 02:09:00.636870 kernel: kvm_amd: Nested Virtualization enabled Jan 28 02:09:00.636892 kernel: kvm_amd: Nested Paging enabled Jan 28 02:09:00.636910 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 02:09:00.636927 kernel: kvm_amd: PMU virtualization is disabled Jan 28 02:09:01.086881 systemd-networkd[1469]: lo: Link UP Jan 28 02:09:01.086892 systemd-networkd[1469]: lo: Gained carrier Jan 28 02:09:01.088675 systemd-resolved[1470]: Positive Trust Anchors: Jan 28 02:09:01.088775 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 02:09:01.088815 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 02:09:01.091308 systemd-networkd[1469]: Enumeration completed Jan 28 02:09:01.096284 systemd-resolved[1470]: Defaulting to hostname 'linux'. Jan 28 02:09:01.098861 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 28 02:09:01.098875 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 02:09:01.101689 systemd-networkd[1469]: eth0: Link UP Jan 28 02:09:01.102393 systemd-networkd[1469]: eth0: Gained carrier Jan 28 02:09:01.102421 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:09:01.134743 systemd-networkd[1469]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 02:09:01.136446 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. Jan 28 02:09:01.141345 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 02:09:01.141696 systemd-timesyncd[1473]: Initial clock synchronization to Wed 2026-01-28 02:09:01.077622 UTC. Jan 28 02:09:01.199885 kernel: EDAC MC: Ver: 3.0.0 Jan 28 02:09:01.562919 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 02:09:01.579824 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 02:09:01.606472 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 02:09:01.632199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 02:09:01.652324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:09:01.672412 systemd[1]: Reached target network.target - Network. Jan 28 02:09:01.684249 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 02:09:01.698978 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 02:09:01.712156 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 02:09:01.739266 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 28 02:09:01.755404 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 02:09:01.774349 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 02:09:01.796389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 02:09:01.796688 systemd[1]: Reached target paths.target - Path Units. Jan 28 02:09:01.811427 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 02:09:01.843290 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 02:09:01.866420 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 02:09:01.886468 systemd[1]: Reached target timers.target - Timer Units. Jan 28 02:09:01.914841 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 02:09:01.953135 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 02:09:01.981979 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 02:09:02.003677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 02:09:02.037199 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 02:09:02.078805 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 02:09:02.097395 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 02:09:02.134892 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 02:09:02.170866 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 02:09:02.186197 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 02:09:02.210401 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 28 02:09:02.238379 systemd[1]: Reached target basic.target - Basic System. Jan 28 02:09:02.255202 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 02:09:02.255341 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 02:09:02.260871 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 02:09:02.304187 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 02:09:02.344260 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 02:09:02.378359 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 02:09:02.409052 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 02:09:02.435412 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 02:09:02.440941 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 02:09:02.458787 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 02:09:02.472045 jq[1518]: false Jan 28 02:09:02.480054 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 02:09:02.497484 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jan 28 02:09:02.495352 oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jan 28 02:09:02.498393 extend-filesystems[1519]: Found /dev/vda6 Jan 28 02:09:02.532338 extend-filesystems[1519]: Found /dev/vda9 Jan 28 02:09:02.555176 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting Jan 28 02:09:02.555176 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 28 02:09:02.555176 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache Jan 28 02:09:02.541401 oslogin_cache_refresh[1520]: Failure getting users, quitting Jan 28 02:09:02.541431 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 02:09:02.541699 oslogin_cache_refresh[1520]: Refreshing group entry cache Jan 28 02:09:02.555642 extend-filesystems[1519]: Checking size of /dev/vda9 Jan 28 02:09:02.564049 oslogin_cache_refresh[1520]: Failure getting groups, quitting Jan 28 02:09:02.561750 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 02:09:02.596081 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting Jan 28 02:09:02.596081 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 02:09:02.564069 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 02:09:02.598957 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 02:09:02.632891 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 02:09:02.652958 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 02:09:02.654835 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 02:09:02.662215 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 28 02:09:02.666936 extend-filesystems[1519]: Resized partition /dev/vda9 Jan 28 02:09:02.689070 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 02:09:02.760408 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 02:09:02.677799 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 02:09:02.743755 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 02:09:02.777777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 02:09:02.803317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 02:09:02.840228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 02:09:02.841206 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 02:09:02.841899 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 02:09:02.883821 jq[1544]: true Jan 28 02:09:02.857475 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 02:09:02.889406 update_engine[1541]: I20260128 02:09:02.887423 1541 main.cc:92] Flatcar Update Engine starting Jan 28 02:09:02.858861 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 02:09:02.881179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 02:09:02.881841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 28 02:09:02.942825 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 02:09:02.970350 jq[1550]: true Jan 28 02:09:02.973338 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 02:09:02.988674 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 02:09:02.988674 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 02:09:02.988674 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 28 02:09:03.083306 extend-filesystems[1519]: Resized filesystem in /dev/vda9 Jan 28 02:09:02.991348 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 02:09:03.097365 tar[1549]: linux-amd64/LICENSE Jan 28 02:09:03.097365 tar[1549]: linux-amd64/helm Jan 28 02:09:03.036437 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 02:09:03.103037 systemd-networkd[1469]: eth0: Gained IPv6LL Jan 28 02:09:03.157802 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 02:09:03.159166 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 02:09:03.159202 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 02:09:03.161862 systemd-logind[1534]: New seat seat0. Jan 28 02:09:03.177058 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 02:09:03.194251 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 02:09:03.226450 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 02:09:03.234081 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 02:09:03.256340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 02:09:03.274201 dbus-daemon[1516]: [system] SELinux support is enabled Jan 28 02:09:03.277201 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 02:09:03.285254 update_engine[1541]: I20260128 02:09:03.284801 1541 update_check_scheduler.cc:74] Next update check in 9m9s Jan 28 02:09:03.291892 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 02:09:03.324376 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 02:09:03.324690 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 02:09:03.339343 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 02:09:03.339376 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 02:09:03.342827 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 02:09:03.370190 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Jan 28 02:09:03.373172 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 02:09:03.402414 systemd[1]: Started update-engine.service - Update Engine. Jan 28 02:09:03.440277 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 02:09:03.451143 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 02:09:03.477110 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 02:09:03.534380 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 02:09:03.553289 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 28 02:09:03.568320 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 02:09:03.569097 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 02:09:03.592300 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 02:09:03.644138 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 02:09:03.645107 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 02:09:03.674231 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 02:09:03.693838 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 02:09:03.724783 containerd[1551]: time="2026-01-28T02:09:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 02:09:03.729866 containerd[1551]: time="2026-01-28T02:09:03.729812570Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 28 02:09:03.739220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 02:09:03.767918 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 28 02:09:03.768800 containerd[1551]: time="2026-01-28T02:09:03.767921660Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.251µs" Jan 28 02:09:03.768800 containerd[1551]: time="2026-01-28T02:09:03.767963323Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 02:09:03.768800 containerd[1551]: time="2026-01-28T02:09:03.767987089Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 02:09:03.768800 containerd[1551]: time="2026-01-28T02:09:03.768318949Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 02:09:03.768800 containerd[1551]: time="2026-01-28T02:09:03.768353853Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 02:09:03.768800 containerd[1551]: time="2026-01-28T02:09:03.768393179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 02:09:03.768989 containerd[1551]: time="2026-01-28T02:09:03.768477543Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 02:09:03.769047 containerd[1551]: time="2026-01-28T02:09:03.769029271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 02:09:03.770295 containerd[1551]: time="2026-01-28T02:09:03.770267898Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 02:09:03.773784 containerd[1551]: time="2026-01-28T02:09:03.773647155Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 02:09:03.773879 containerd[1551]: time="2026-01-28T02:09:03.773862104Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 02:09:03.773957 containerd[1551]: time="2026-01-28T02:09:03.773941316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 02:09:03.774261 containerd[1551]: time="2026-01-28T02:09:03.774235876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 02:09:03.775760 containerd[1551]: time="2026-01-28T02:09:03.775732350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 02:09:03.775877 containerd[1551]: time="2026-01-28T02:09:03.775858285Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 02:09:03.775936 containerd[1551]: time="2026-01-28T02:09:03.775922296Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 02:09:03.776014 containerd[1551]: time="2026-01-28T02:09:03.775999621Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 02:09:03.776850 containerd[1551]: time="2026-01-28T02:09:03.776826975Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 02:09:03.776986 containerd[1551]: time="2026-01-28T02:09:03.776969141Z" level=info msg="metadata content store policy set" policy=shared Jan 28 02:09:03.801191 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 28 02:09:03.829876 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.831699997Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.831890471Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832028245Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832055514Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832073330Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832088282Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832104453Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832121191Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832136792Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832151843Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832165797Z" level=info 
msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832183465Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832390248Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 02:09:03.834389 containerd[1551]: time="2026-01-28T02:09:03.832437609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832471287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832685027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832709132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832724144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832739706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832753530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832768861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832791878Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 
Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.832807380Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.833220018Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.833252946Z" level=info msg="Start snapshots syncer" Jan 28 02:09:03.840327 containerd[1551]: time="2026-01-28T02:09:03.833285106Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 02:09:03.840832 containerd[1551]: time="2026-01-28T02:09:03.833785811Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 02:09:03.840832 containerd[1551]: time="2026-01-28T02:09:03.833850590Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837071105Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837248752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837295056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837311784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837325768Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837349464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837363917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]:
time="2026-01-28T02:09:03.837376314Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837404182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837424874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837445634Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837485401Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837698732Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 02:09:03.842738 containerd[1551]: time="2026-01-28T02:09:03.837710609Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837722767Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837732778Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837746563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837771455Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837792047Z" level=info msg="runtime interface created" Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837801061Z" level=info msg="created NRI interface" Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837815812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837831653Z" level=info msg="Connect containerd service" Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.837862177Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 02:09:03.851481 containerd[1551]: time="2026-01-28T02:09:03.842688242Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.096764026Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.096883171Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.096916107Z" level=info msg="Start subscribing containerd event" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097044877Z" level=info msg="Start recovering state" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097167077Z" level=info msg="Start event monitor" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097186945Z" level=info msg="Start cni network conf syncer for default" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097197687Z" level=info msg="Start streaming server" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097215688Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097226220Z" level=info msg="runtime interface starting up..." Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097236673Z" level=info msg="starting plugins..." Jan 28 02:09:04.124311 containerd[1551]: time="2026-01-28T02:09:04.097258708Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 02:09:04.098076 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 02:09:04.128479 tar[1549]: linux-amd64/README.md Jan 28 02:09:04.134769 containerd[1551]: time="2026-01-28T02:09:04.132047880Z" level=info msg="containerd successfully booted in 0.408774s" Jan 28 02:09:04.224750 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 02:09:05.821895 kernel: hrtimer: interrupt took 4507096 ns Jan 28 02:09:06.778484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:09:06.816194 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 02:09:06.835398 systemd[1]: Startup finished in 9.428s (kernel) + 30.306s (initrd) + 17.481s (userspace) = 57.215s. 
Jan 28 02:09:06.841811 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:09:13.420361 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 6195795454 wd_nsec: 6195792643 Jan 28 02:09:14.929963 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 02:09:14.937939 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:55268.service - OpenSSH per-connection server daemon (10.0.0.1:55268). Jan 28 02:09:16.171783 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 55268 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:16.193211 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:16.372119 systemd-logind[1534]: New session 1 of user core. Jan 28 02:09:16.379959 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 02:09:16.391010 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 02:09:16.625434 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 02:09:16.646376 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 02:09:16.774820 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 02:09:16.866373 systemd-logind[1534]: New session c1 of user core. Jan 28 02:09:18.165277 systemd[1666]: Queued start job for default target default.target. Jan 28 02:09:18.189743 systemd[1666]: Created slice app.slice - User Application Slice. Jan 28 02:09:18.189910 systemd[1666]: Reached target paths.target - Paths. Jan 28 02:09:18.190081 systemd[1666]: Reached target timers.target - Timers. Jan 28 02:09:18.213321 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 02:09:18.396167 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 28 02:09:18.403156 systemd[1666]: Reached target sockets.target - Sockets. Jan 28 02:09:18.403345 systemd[1666]: Reached target basic.target - Basic System. Jan 28 02:09:18.403414 systemd[1666]: Reached target default.target - Main User Target. Jan 28 02:09:18.403465 systemd[1666]: Startup finished in 1.339s. Jan 28 02:09:18.403812 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 02:09:18.426298 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 02:09:18.620148 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:52338.service - OpenSSH per-connection server daemon (10.0.0.1:52338). Jan 28 02:09:19.358880 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 52338 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:19.377824 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:19.478159 systemd-logind[1534]: New session 2 of user core. Jan 28 02:09:19.493270 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 02:09:19.768731 sshd[1681]: Connection closed by 10.0.0.1 port 52338 Jan 28 02:09:19.779988 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jan 28 02:09:19.806975 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:52352.service - OpenSSH per-connection server daemon (10.0.0.1:52352). Jan 28 02:09:19.808368 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:52338.service: Deactivated successfully. Jan 28 02:09:19.819036 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 02:09:19.836295 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Jan 28 02:09:19.860434 systemd-logind[1534]: Removed session 2. 
Jan 28 02:09:20.141949 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 52352 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:20.140675 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:20.265876 systemd-logind[1534]: New session 3 of user core. Jan 28 02:09:20.288690 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 02:09:20.424303 kubelet[1650]: E0128 02:09:20.422419 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:09:20.440153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:09:20.440487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:09:20.449433 systemd[1]: kubelet.service: Consumed 9.295s CPU time, 258.2M memory peak. Jan 28 02:09:20.491951 sshd[1691]: Connection closed by 10.0.0.1 port 52352 Jan 28 02:09:20.493035 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jan 28 02:09:20.516931 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:52352.service: Deactivated successfully. Jan 28 02:09:20.537104 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 02:09:20.549791 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Jan 28 02:09:20.565407 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:52362.service - OpenSSH per-connection server daemon (10.0.0.1:52362). Jan 28 02:09:20.589155 systemd-logind[1534]: Removed session 3. 
Jan 28 02:09:21.017441 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 52362 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:21.020031 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:21.092946 systemd-logind[1534]: New session 4 of user core. Jan 28 02:09:21.115419 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 02:09:21.414947 sshd[1701]: Connection closed by 10.0.0.1 port 52362 Jan 28 02:09:21.414930 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jan 28 02:09:21.438166 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:52362.service: Deactivated successfully. Jan 28 02:09:21.453260 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 02:09:21.462001 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. Jan 28 02:09:21.465048 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:52374.service - OpenSSH per-connection server daemon (10.0.0.1:52374). Jan 28 02:09:21.487484 systemd-logind[1534]: Removed session 4. Jan 28 02:09:21.719125 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 52374 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:21.724983 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:21.755716 systemd-logind[1534]: New session 5 of user core. Jan 28 02:09:21.781940 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 28 02:09:22.340954 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 02:09:22.342213 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:09:22.443314 sudo[1711]: pam_unix(sudo:session): session closed for user root Jan 28 02:09:22.480961 sshd[1710]: Connection closed by 10.0.0.1 port 52374 Jan 28 02:09:22.477111 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Jan 28 02:09:22.526833 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:52374.service: Deactivated successfully. Jan 28 02:09:22.538313 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 02:09:22.550184 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. Jan 28 02:09:22.572773 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). Jan 28 02:09:22.586809 systemd-logind[1534]: Removed session 5. Jan 28 02:09:23.014160 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:23.024103 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:23.097145 systemd-logind[1534]: New session 6 of user core. Jan 28 02:09:23.128125 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 28 02:09:23.299373 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 02:09:23.305922 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:09:23.369923 sudo[1722]: pam_unix(sudo:session): session closed for user root Jan 28 02:09:23.422851 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 02:09:23.423761 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:09:23.546424 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 02:09:24.049404 augenrules[1744]: No rules Jan 28 02:09:24.054167 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 02:09:24.054880 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 02:09:24.069902 sudo[1721]: pam_unix(sudo:session): session closed for user root Jan 28 02:09:24.079486 sshd[1720]: Connection closed by 10.0.0.1 port 52378 Jan 28 02:09:24.082047 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Jan 28 02:09:24.108985 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:52378.service: Deactivated successfully. Jan 28 02:09:24.112873 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 02:09:24.126210 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Jan 28 02:09:24.138388 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:52382.service - OpenSSH per-connection server daemon (10.0.0.1:52382). Jan 28 02:09:24.142486 systemd-logind[1534]: Removed session 6. Jan 28 02:09:24.417863 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 52382 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:09:24.442319 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:09:24.496921 systemd-logind[1534]: New session 7 of user core. 
Jan 28 02:09:24.520248 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 02:09:24.682290 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 02:09:24.685023 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:09:28.015727 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 02:09:28.079907 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 02:09:28.875196 dockerd[1777]: time="2026-01-28T02:09:28.873449314Z" level=info msg="Starting up" Jan 28 02:09:28.879794 dockerd[1777]: time="2026-01-28T02:09:28.879311468Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 02:09:28.931381 dockerd[1777]: time="2026-01-28T02:09:28.931225207Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 02:09:29.019450 systemd[1]: var-lib-docker-metacopy\x2dcheck4047448821-merged.mount: Deactivated successfully. Jan 28 02:09:29.076372 dockerd[1777]: time="2026-01-28T02:09:29.076103709Z" level=info msg="Loading containers: start." Jan 28 02:09:29.134017 kernel: Initializing XFRM netlink socket Jan 28 02:09:30.512321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 02:09:30.521472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:09:31.542850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 02:09:31.577345 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:09:31.769995 systemd-networkd[1469]: docker0: Link UP Jan 28 02:09:31.818309 dockerd[1777]: time="2026-01-28T02:09:31.814429565Z" level=info msg="Loading containers: done." Jan 28 02:09:32.023472 dockerd[1777]: time="2026-01-28T02:09:32.023407439Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 02:09:32.024105 dockerd[1777]: time="2026-01-28T02:09:32.024077736Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 02:09:32.024311 dockerd[1777]: time="2026-01-28T02:09:32.024290511Z" level=info msg="Initializing buildkit" Jan 28 02:09:32.248881 kubelet[1940]: E0128 02:09:32.245377 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:09:32.268091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:09:32.268363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:09:32.274410 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 110.4M memory peak. 
Jan 28 02:09:32.412317 dockerd[1777]: time="2026-01-28T02:09:32.409895157Z" level=info msg="Completed buildkit initialization" Jan 28 02:09:32.446411 dockerd[1777]: time="2026-01-28T02:09:32.445151937Z" level=info msg="Daemon has completed initialization" Jan 28 02:09:32.446411 dockerd[1777]: time="2026-01-28T02:09:32.445317160Z" level=info msg="API listen on /run/docker.sock" Jan 28 02:09:32.450407 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 02:09:35.226114 containerd[1551]: time="2026-01-28T02:09:35.225954515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 02:09:36.132153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384091388.mount: Deactivated successfully. Jan 28 02:09:40.383081 containerd[1551]: time="2026-01-28T02:09:40.382875872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:40.385772 containerd[1551]: time="2026-01-28T02:09:40.385379546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 28 02:09:40.397310 containerd[1551]: time="2026-01-28T02:09:40.397148114Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:40.407845 containerd[1551]: time="2026-01-28T02:09:40.407205876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:40.408797 containerd[1551]: time="2026-01-28T02:09:40.408248259Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag 
\"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 5.181398395s" Jan 28 02:09:40.408797 containerd[1551]: time="2026-01-28T02:09:40.408393641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 28 02:09:40.410314 containerd[1551]: time="2026-01-28T02:09:40.409804299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 02:09:42.510218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 02:09:42.515984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:09:42.901996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:09:42.935325 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:09:43.143926 kubelet[2082]: E0128 02:09:43.142920 2082 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:09:43.149914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:09:43.150301 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:09:43.151805 systemd[1]: kubelet.service: Consumed 522ms CPU time, 110.3M memory peak. 
Jan 28 02:09:43.881356 containerd[1551]: time="2026-01-28T02:09:43.880279673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:43.885820 containerd[1551]: time="2026-01-28T02:09:43.885345650Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 28 02:09:43.890395 containerd[1551]: time="2026-01-28T02:09:43.889831038Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:43.897686 containerd[1551]: time="2026-01-28T02:09:43.897294559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:43.898670 containerd[1551]: time="2026-01-28T02:09:43.898383913Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 3.488543047s" Jan 28 02:09:43.899269 containerd[1551]: time="2026-01-28T02:09:43.898976351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 28 02:09:43.900396 containerd[1551]: time="2026-01-28T02:09:43.900067155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 02:09:47.486840 containerd[1551]: time="2026-01-28T02:09:47.485712323Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:47.489381 containerd[1551]: time="2026-01-28T02:09:47.489342566Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 28 02:09:47.496639 containerd[1551]: time="2026-01-28T02:09:47.493329327Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:47.506128 containerd[1551]: time="2026-01-28T02:09:47.505923228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:09:47.509428 containerd[1551]: time="2026-01-28T02:09:47.508986941Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 3.608880042s" Jan 28 02:09:47.509428 containerd[1551]: time="2026-01-28T02:09:47.509250527Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 28 02:09:47.549310 containerd[1551]: time="2026-01-28T02:09:47.548727509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 02:09:48.640477 update_engine[1541]: I20260128 02:09:48.639958 1541 update_attempter.cc:509] Updating boot flags... Jan 28 02:09:53.265448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 28 02:09:53.276382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:09:53.488310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140022162.mount: Deactivated successfully. Jan 28 02:09:54.482813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:09:54.511233 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:09:57.022772 kubelet[2127]: E0128 02:09:57.021365 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:09:57.035157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:09:57.036097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:09:57.037438 systemd[1]: kubelet.service: Consumed 3.167s CPU time, 112.5M memory peak. 
Jan 28 02:10:00.030930 containerd[1551]: time="2026-01-28T02:10:00.030170751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:00.032377 containerd[1551]: time="2026-01-28T02:10:00.031162811Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 28 02:10:00.034935 containerd[1551]: time="2026-01-28T02:10:00.034412850Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:00.044457 containerd[1551]: time="2026-01-28T02:10:00.044039208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:00.045458 containerd[1551]: time="2026-01-28T02:10:00.045200998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 12.496432059s" Jan 28 02:10:00.045458 containerd[1551]: time="2026-01-28T02:10:00.045439800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 28 02:10:00.054020 containerd[1551]: time="2026-01-28T02:10:00.053965065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 02:10:00.899831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007287355.mount: Deactivated successfully. 
Jan 28 02:10:04.900722 containerd[1551]: time="2026-01-28T02:10:04.899088072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:04.905005 containerd[1551]: time="2026-01-28T02:10:04.904862841Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 28 02:10:04.909005 containerd[1551]: time="2026-01-28T02:10:04.908746658Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:04.918828 containerd[1551]: time="2026-01-28T02:10:04.917447239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:04.919438 containerd[1551]: time="2026-01-28T02:10:04.919309399Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.865098069s" Jan 28 02:10:04.919438 containerd[1551]: time="2026-01-28T02:10:04.919335847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 28 02:10:04.926769 containerd[1551]: time="2026-01-28T02:10:04.926737082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 02:10:05.532058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874486740.mount: Deactivated successfully. 
Jan 28 02:10:05.556771 containerd[1551]: time="2026-01-28T02:10:05.556728967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:05.559795 containerd[1551]: time="2026-01-28T02:10:05.559744050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 28 02:10:05.564421 containerd[1551]: time="2026-01-28T02:10:05.564094728Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:05.571433 containerd[1551]: time="2026-01-28T02:10:05.571397720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:05.572702 containerd[1551]: time="2026-01-28T02:10:05.572675815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 645.817646ms" Jan 28 02:10:05.572808 containerd[1551]: time="2026-01-28T02:10:05.572791081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 28 02:10:05.575951 containerd[1551]: time="2026-01-28T02:10:05.575360655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 02:10:06.268467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829014716.mount: Deactivated successfully. Jan 28 02:10:07.438236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 28 02:10:07.537062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:10:08.607083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:10:08.622305 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:10:09.465227 kubelet[2248]: E0128 02:10:09.465015 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:10:09.472103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:10:09.472446 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:10:09.473910 systemd[1]: kubelet.service: Consumed 1.629s CPU time, 110.4M memory peak. 
Jan 28 02:10:13.301876 containerd[1551]: time="2026-01-28T02:10:13.301730429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:13.303188 containerd[1551]: time="2026-01-28T02:10:13.303006150Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 28 02:10:13.305756 containerd[1551]: time="2026-01-28T02:10:13.305723870Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:13.310214 containerd[1551]: time="2026-01-28T02:10:13.310032577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:10:13.311477 containerd[1551]: time="2026-01-28T02:10:13.311209905Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.73581691s" Jan 28 02:10:13.311477 containerd[1551]: time="2026-01-28T02:10:13.311388774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 28 02:10:19.549015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 02:10:19.586399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:10:19.625339 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 02:10:19.625749 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 28 02:10:19.626464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:10:19.636967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:10:19.908433 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)... Jan 28 02:10:19.908757 systemd[1]: Reloading... Jan 28 02:10:20.297767 zram_generator::config[2344]: No configuration found. Jan 28 02:10:20.625893 systemd[1]: Reloading finished in 716 ms. Jan 28 02:10:20.801340 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 02:10:20.801892 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 02:10:20.802666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:10:20.802983 systemd[1]: kubelet.service: Consumed 566ms CPU time, 98.2M memory peak. Jan 28 02:10:20.808689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:10:21.204063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:10:21.228242 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:10:21.609609 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 02:10:21.609609 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 02:10:21.610225 kubelet[2387]: I0128 02:10:21.609812 2387 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:10:22.034318 kubelet[2387]: I0128 02:10:22.033950 2387 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 02:10:22.034318 kubelet[2387]: I0128 02:10:22.034045 2387 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:10:22.034318 kubelet[2387]: I0128 02:10:22.034083 2387 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 02:10:22.034318 kubelet[2387]: I0128 02:10:22.034091 2387 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 02:10:22.034818 kubelet[2387]: I0128 02:10:22.034340 2387 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 02:10:22.114466 kubelet[2387]: I0128 02:10:22.114265 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:10:22.117659 kubelet[2387]: E0128 02:10:22.116246 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 02:10:22.128657 kubelet[2387]: I0128 02:10:22.128134 2387 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 02:10:22.576406 kubelet[2387]: I0128 02:10:22.576053 2387 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 02:10:22.578663 kubelet[2387]: I0128 02:10:22.577908 2387 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:10:22.580109 kubelet[2387]: I0128 02:10:22.578663 2387 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 02:10:22.580381 kubelet[2387]: I0128 02:10:22.580166 2387 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 02:10:22.580381 
kubelet[2387]: I0128 02:10:22.580186 2387 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 02:10:22.580462 kubelet[2387]: I0128 02:10:22.580404 2387 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 02:10:22.594374 kubelet[2387]: I0128 02:10:22.594166 2387 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:10:22.595616 kubelet[2387]: I0128 02:10:22.595445 2387 kubelet.go:475] "Attempting to sync node with API server" Jan 28 02:10:22.595791 kubelet[2387]: I0128 02:10:22.595749 2387 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:10:22.596000 kubelet[2387]: I0128 02:10:22.595854 2387 kubelet.go:387] "Adding apiserver pod source" Jan 28 02:10:22.598398 kubelet[2387]: I0128 02:10:22.598368 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:10:22.599211 kubelet[2387]: E0128 02:10:22.598870 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 02:10:22.601137 kubelet[2387]: E0128 02:10:22.600439 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 02:10:22.610008 kubelet[2387]: I0128 02:10:22.608123 2387 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 02:10:22.611973 kubelet[2387]: I0128 02:10:22.611375 2387 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 02:10:22.611973 kubelet[2387]: I0128 02:10:22.611747 2387 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 02:10:22.612076 kubelet[2387]: W0128 02:10:22.611978 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 02:10:22.629295 kubelet[2387]: I0128 02:10:22.628834 2387 server.go:1262] "Started kubelet" Jan 28 02:10:22.631058 kubelet[2387]: I0128 02:10:22.629263 2387 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:10:22.631279 kubelet[2387]: I0128 02:10:22.631256 2387 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 02:10:22.633149 kubelet[2387]: I0128 02:10:22.632448 2387 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:10:22.638820 kubelet[2387]: I0128 02:10:22.636258 2387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:10:22.638820 kubelet[2387]: I0128 02:10:22.636323 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:10:22.643365 kubelet[2387]: I0128 02:10:22.643132 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:10:22.646855 kubelet[2387]: I0128 02:10:22.646837 2387 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 02:10:22.648488 kubelet[2387]: E0128 02:10:22.648067 2387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 02:10:22.651457 kubelet[2387]: I0128 02:10:22.651146 2387 server.go:310] "Adding debug handlers to kubelet server" Jan 28 02:10:22.659875 
kubelet[2387]: I0128 02:10:22.659848 2387 factory.go:223] Registration of the systemd container factory successfully Jan 28 02:10:22.660233 kubelet[2387]: I0128 02:10:22.660213 2387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:10:22.677761 kubelet[2387]: I0128 02:10:22.676788 2387 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 02:10:22.677761 kubelet[2387]: I0128 02:10:22.676912 2387 reconciler.go:29] "Reconciler: start to sync state" Jan 28 02:10:22.677947 kubelet[2387]: E0128 02:10:22.677477 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 02:10:22.678204 kubelet[2387]: E0128 02:10:22.678170 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Jan 28 02:10:22.680180 kubelet[2387]: I0128 02:10:22.680055 2387 factory.go:223] Registration of the containerd container factory successfully Jan 28 02:10:22.685747 kubelet[2387]: E0128 02:10:22.684238 2387 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 02:10:22.688880 kubelet[2387]: E0128 02:10:22.677374 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec31a9a4c839d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 02:10:22.628455325 +0000 UTC m=+1.243143269,LastTimestamp:2026-01-28 02:10:22.628455325 +0000 UTC m=+1.243143269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 02:10:22.716826 kubelet[2387]: I0128 02:10:22.716794 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:10:22.717016 kubelet[2387]: I0128 02:10:22.717002 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:10:22.717097 kubelet[2387]: I0128 02:10:22.717085 2387 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:10:22.725833 kubelet[2387]: I0128 02:10:22.725806 2387 policy_none.go:49] "None policy: Start" Jan 28 02:10:22.726083 kubelet[2387]: I0128 02:10:22.726060 2387 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 02:10:22.726184 kubelet[2387]: I0128 02:10:22.726167 2387 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 02:10:22.732351 kubelet[2387]: I0128 02:10:22.732331 2387 policy_none.go:47] "Start" Jan 28 02:10:22.757386 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 28 02:10:22.759043 kubelet[2387]: E0128 02:10:22.758337 2387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 02:10:22.789369 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 02:10:22.792361 kubelet[2387]: I0128 02:10:22.792118 2387 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 02:10:22.797909 kubelet[2387]: I0128 02:10:22.797879 2387 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 02:10:22.798132 kubelet[2387]: I0128 02:10:22.798115 2387 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 02:10:22.798388 kubelet[2387]: I0128 02:10:22.798373 2387 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 02:10:22.798799 kubelet[2387]: E0128 02:10:22.798773 2387 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 02:10:22.800104 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 02:10:22.800861 kubelet[2387]: E0128 02:10:22.800442 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 02:10:22.813429 kubelet[2387]: E0128 02:10:22.813188 2387 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 02:10:22.814099 kubelet[2387]: I0128 02:10:22.814086 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:10:22.815828 kubelet[2387]: I0128 02:10:22.815364 2387 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:10:22.817229 kubelet[2387]: I0128 02:10:22.817206 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:10:22.818735 kubelet[2387]: E0128 02:10:22.818182 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 02:10:22.818735 kubelet[2387]: E0128 02:10:22.818375 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 02:10:22.880064 kubelet[2387]: E0128 02:10:22.879915 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Jan 28 02:10:22.925735 kubelet[2387]: I0128 02:10:22.923739 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:22.925735 kubelet[2387]: E0128 02:10:22.924420 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 28 02:10:22.981026 kubelet[2387]: I0128 02:10:22.980755 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2957bf6a174e7ec90111fe9191464798-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2957bf6a174e7ec90111fe9191464798\") " pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:22.981026 kubelet[2387]: I0128 02:10:22.981016 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:22.981453 kubelet[2387]: I0128 02:10:22.981050 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2957bf6a174e7ec90111fe9191464798-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2957bf6a174e7ec90111fe9191464798\") " pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:22.981453 kubelet[2387]: I0128 02:10:22.981118 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2957bf6a174e7ec90111fe9191464798-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2957bf6a174e7ec90111fe9191464798\") " pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:22.981453 kubelet[2387]: I0128 02:10:22.981144 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:22.981453 kubelet[2387]: I0128 02:10:22.981177 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:22.981453 kubelet[2387]: I0128 02:10:22.981295 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:22.981890 kubelet[2387]: I0128 02:10:22.981319 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:22.981890 kubelet[2387]: I0128 02:10:22.981344 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:23.033183 systemd[1]: Created slice kubepods-burstable-pod2957bf6a174e7ec90111fe9191464798.slice - libcontainer container kubepods-burstable-pod2957bf6a174e7ec90111fe9191464798.slice. Jan 28 02:10:23.087387 kubelet[2387]: E0128 02:10:23.085483 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:23.103734 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 28 02:10:23.115723 containerd[1551]: time="2026-01-28T02:10:23.115370382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2957bf6a174e7ec90111fe9191464798,Namespace:kube-system,Attempt:0,}" Jan 28 02:10:23.155256 kubelet[2387]: I0128 02:10:23.154034 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:23.155256 kubelet[2387]: E0128 02:10:23.155197 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:23.157282 kubelet[2387]: E0128 02:10:23.157245 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 28 02:10:23.183178 containerd[1551]: time="2026-01-28T02:10:23.183023571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 28 02:10:23.186417 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 28 02:10:23.207943 kubelet[2387]: E0128 02:10:23.207732 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:23.222095 containerd[1551]: time="2026-01-28T02:10:23.221355981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 28 02:10:23.281925 kubelet[2387]: E0128 02:10:23.281349 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Jan 28 02:10:23.575749 kubelet[2387]: I0128 02:10:23.575213 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:23.576748 kubelet[2387]: E0128 02:10:23.576359 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 28 02:10:23.728025 kubelet[2387]: E0128 02:10:23.727843 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 02:10:23.830283 kubelet[2387]: E0128 02:10:23.828932 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec31a9a4c839d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 02:10:22.628455325 +0000 UTC m=+1.243143269,LastTimestamp:2026-01-28 02:10:22.628455325 +0000 UTC m=+1.243143269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 02:10:24.086110 kubelet[2387]: E0128 02:10:24.083294 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Jan 28 02:10:24.089934 kubelet[2387]: E0128 02:10:24.089216 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 02:10:24.108446 kubelet[2387]: E0128 02:10:24.107888 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 02:10:24.114096 kubelet[2387]: E0128 02:10:24.113456 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Jan 28 02:10:24.193392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773555204.mount: Deactivated successfully. Jan 28 02:10:24.218966 containerd[1551]: time="2026-01-28T02:10:24.218197400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:10:24.226274 kubelet[2387]: E0128 02:10:24.226155 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 02:10:24.236989 containerd[1551]: time="2026-01-28T02:10:24.236684284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 02:10:24.258141 containerd[1551]: time="2026-01-28T02:10:24.257423140Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:10:24.266850 containerd[1551]: time="2026-01-28T02:10:24.266371944Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:10:24.282815 containerd[1551]: time="2026-01-28T02:10:24.282042098Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:10:24.289397 containerd[1551]: time="2026-01-28T02:10:24.289135692Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 02:10:24.293443 containerd[1551]: time="2026-01-28T02:10:24.293217568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 02:10:24.303946 containerd[1551]: time="2026-01-28T02:10:24.302710575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:10:24.312144 containerd[1551]: time="2026-01-28T02:10:24.311898635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.177912408s" Jan 28 02:10:24.322341 containerd[1551]: time="2026-01-28T02:10:24.321216332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.084013461s" Jan 28 02:10:24.343208 containerd[1551]: time="2026-01-28T02:10:24.340445547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.14352555s" Jan 28 02:10:24.387172 kubelet[2387]: I0128 02:10:24.386749 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:24.388999 kubelet[2387]: E0128 
02:10:24.387959 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 28 02:10:24.938098 containerd[1551]: time="2026-01-28T02:10:24.938041953Z" level=info msg="connecting to shim ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b" address="unix:///run/containerd/s/cd542ba8a761915719753635daecb04c16d9c4081a3b9a852df8980c954c1c22" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:10:24.975104 containerd[1551]: time="2026-01-28T02:10:24.974326154Z" level=info msg="connecting to shim b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26" address="unix:///run/containerd/s/df889df68465985396832ec13a9ea6324302e902789643b48d16add92059b8d1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:10:25.345058 containerd[1551]: time="2026-01-28T02:10:25.344867937Z" level=info msg="connecting to shim e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30" address="unix:///run/containerd/s/6dfe24d554901a7fa012eee7a17307817ac8932eeab2c87bad756a52fb43a162" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:10:25.577218 systemd[1]: Started cri-containerd-e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30.scope - libcontainer container e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30. Jan 28 02:10:25.964001 systemd[1]: Started cri-containerd-b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26.scope - libcontainer container b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26. 
Jan 28 02:10:25.984773 kubelet[2387]: E0128 02:10:25.982931 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="3.2s" Jan 28 02:10:25.987708 kubelet[2387]: E0128 02:10:25.985846 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 02:10:25.994108 kubelet[2387]: I0128 02:10:25.994014 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:25.994933 kubelet[2387]: E0128 02:10:25.994474 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 28 02:10:26.116011 systemd[1]: Started cri-containerd-ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b.scope - libcontainer container ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b. 
Jan 28 02:10:26.843297 kubelet[2387]: E0128 02:10:26.843205 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 02:10:26.850803 containerd[1551]: time="2026-01-28T02:10:26.850431679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2957bf6a174e7ec90111fe9191464798,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26\"" Jan 28 02:10:26.864249 containerd[1551]: time="2026-01-28T02:10:26.864151487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30\"" Jan 28 02:10:26.877998 containerd[1551]: time="2026-01-28T02:10:26.877962613Z" level=info msg="CreateContainer within sandbox \"b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 02:10:26.884007 containerd[1551]: time="2026-01-28T02:10:26.883972434Z" level=info msg="CreateContainer within sandbox \"e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 02:10:26.925733 containerd[1551]: time="2026-01-28T02:10:26.924287908Z" level=info msg="Container 1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:10:26.929774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116058674.mount: Deactivated successfully. 
Jan 28 02:10:26.940721 kubelet[2387]: E0128 02:10:26.936889 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 02:10:26.940721 kubelet[2387]: E0128 02:10:26.937008 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 02:10:26.986676 containerd[1551]: time="2026-01-28T02:10:26.985103682Z" level=info msg="Container ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:10:26.989393 containerd[1551]: time="2026-01-28T02:10:26.989207200Z" level=info msg="CreateContainer within sandbox \"b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c\"" Jan 28 02:10:27.014778 containerd[1551]: time="2026-01-28T02:10:27.014437855Z" level=info msg="StartContainer for \"1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c\"" Jan 28 02:10:27.042090 containerd[1551]: time="2026-01-28T02:10:27.041921518Z" level=info msg="connecting to shim 1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c" address="unix:///run/containerd/s/df889df68465985396832ec13a9ea6324302e902789643b48d16add92059b8d1" protocol=ttrpc version=3 Jan 28 02:10:27.121998 containerd[1551]: time="2026-01-28T02:10:27.120094184Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b\"" Jan 28 02:10:27.177824 containerd[1551]: time="2026-01-28T02:10:27.177719844Z" level=info msg="CreateContainer within sandbox \"e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402\"" Jan 28 02:10:27.191942 containerd[1551]: time="2026-01-28T02:10:27.190103742Z" level=info msg="StartContainer for \"ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402\"" Jan 28 02:10:27.212707 containerd[1551]: time="2026-01-28T02:10:27.211404636Z" level=info msg="connecting to shim ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402" address="unix:///run/containerd/s/6dfe24d554901a7fa012eee7a17307817ac8932eeab2c87bad756a52fb43a162" protocol=ttrpc version=3 Jan 28 02:10:27.339353 containerd[1551]: time="2026-01-28T02:10:27.339292665Z" level=info msg="CreateContainer within sandbox \"ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 02:10:27.399874 containerd[1551]: time="2026-01-28T02:10:27.399354275Z" level=info msg="Container cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:10:27.413376 systemd[1]: Started cri-containerd-1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c.scope - libcontainer container 1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c. 
Jan 28 02:10:27.422811 containerd[1551]: time="2026-01-28T02:10:27.422124307Z" level=info msg="CreateContainer within sandbox \"ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295\"" Jan 28 02:10:27.423063 containerd[1551]: time="2026-01-28T02:10:27.422949912Z" level=info msg="StartContainer for \"cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295\"" Jan 28 02:10:27.425288 containerd[1551]: time="2026-01-28T02:10:27.425123661Z" level=info msg="connecting to shim cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295" address="unix:///run/containerd/s/cd542ba8a761915719753635daecb04c16d9c4081a3b9a852df8980c954c1c22" protocol=ttrpc version=3 Jan 28 02:10:27.446887 systemd[1]: Started cri-containerd-ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402.scope - libcontainer container ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402. Jan 28 02:10:27.500905 systemd[1]: Started cri-containerd-cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295.scope - libcontainer container cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295. 
Jan 28 02:10:27.871951 containerd[1551]: time="2026-01-28T02:10:27.871906820Z" level=info msg="StartContainer for \"cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295\" returns successfully" Jan 28 02:10:27.876847 containerd[1551]: time="2026-01-28T02:10:27.876729919Z" level=info msg="StartContainer for \"1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c\" returns successfully" Jan 28 02:10:27.927769 containerd[1551]: time="2026-01-28T02:10:27.926691210Z" level=info msg="StartContainer for \"ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402\" returns successfully" Jan 28 02:10:28.274357 kubelet[2387]: E0128 02:10:28.272127 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:28.295983 kubelet[2387]: E0128 02:10:28.295849 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:28.347660 kubelet[2387]: E0128 02:10:28.346807 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:29.207170 kubelet[2387]: I0128 02:10:29.207032 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:29.394742 kubelet[2387]: E0128 02:10:29.394247 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:29.398877 kubelet[2387]: E0128 02:10:29.398020 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:29.401083 kubelet[2387]: E0128 02:10:29.400976 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Jan 28 02:10:30.387423 kubelet[2387]: E0128 02:10:30.387105 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:30.388598 kubelet[2387]: E0128 02:10:30.388194 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:31.902943 kubelet[2387]: E0128 02:10:31.902890 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:32.819714 kubelet[2387]: E0128 02:10:32.819374 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 02:10:33.310303 kubelet[2387]: E0128 02:10:33.310101 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 02:10:36.120955 kubelet[2387]: E0128 02:10:36.120899 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 02:10:36.194817 kubelet[2387]: I0128 02:10:36.194775 2387 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 02:10:36.195632 kubelet[2387]: E0128 02:10:36.195421 2387 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 02:10:36.263763 kubelet[2387]: E0128 02:10:36.263304 2387 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec31a9a4c839d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 02:10:22.628455325 +0000 UTC m=+1.243143269,LastTimestamp:2026-01-28 02:10:22.628455325 +0000 UTC m=+1.243143269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 02:10:36.284769 kubelet[2387]: I0128 02:10:36.284344 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:36.703718 kubelet[2387]: E0128 02:10:36.703176 2387 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec31a9d9f4e76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 02:10:22.684212854 +0000 UTC m=+1.298900819,LastTimestamp:2026-01-28 02:10:22.684212854 +0000 UTC m=+1.298900819,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 02:10:36.716998 kubelet[2387]: E0128 02:10:36.716701 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:36.716998 kubelet[2387]: I0128 02:10:36.716744 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:36.723012 kubelet[2387]: E0128 02:10:36.722667 2387 
kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:36.723012 kubelet[2387]: I0128 02:10:36.722714 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:36.732250 kubelet[2387]: E0128 02:10:36.732052 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:37.051457 kubelet[2387]: I0128 02:10:37.050387 2387 apiserver.go:52] "Watching apiserver" Jan 28 02:10:37.077125 kubelet[2387]: I0128 02:10:37.077004 2387 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 02:10:39.141632 kubelet[2387]: I0128 02:10:39.141127 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:41.826232 systemd[1]: Reload requested from client PID 2679 ('systemctl') (unit session-7.scope)... Jan 28 02:10:41.826336 systemd[1]: Reloading... Jan 28 02:10:41.912995 kubelet[2387]: I0128 02:10:41.912788 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:42.038832 zram_generator::config[2722]: No configuration found. Jan 28 02:10:42.518259 systemd[1]: Reloading finished in 691 ms. Jan 28 02:10:42.603119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:10:42.636763 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 02:10:42.637440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:10:42.637679 systemd[1]: kubelet.service: Consumed 5.643s CPU time, 129.4M memory peak. 
Jan 28 02:10:42.642101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:10:42.996356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:10:43.020426 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:10:43.248990 kubelet[2766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 02:10:43.248990 kubelet[2766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:10:43.249412 kubelet[2766]: I0128 02:10:43.249147 2766 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:10:43.276341 kubelet[2766]: I0128 02:10:43.276199 2766 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 02:10:43.276341 kubelet[2766]: I0128 02:10:43.276296 2766 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:10:43.276341 kubelet[2766]: I0128 02:10:43.276323 2766 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 02:10:43.276341 kubelet[2766]: I0128 02:10:43.276330 2766 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 02:10:43.276847 kubelet[2766]: I0128 02:10:43.276740 2766 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 02:10:43.278954 kubelet[2766]: I0128 02:10:43.278799 2766 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 02:10:43.287113 kubelet[2766]: I0128 02:10:43.287093 2766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:10:43.307140 sudo[2783]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 02:10:43.308401 sudo[2783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 02:10:43.313120 kubelet[2766]: I0128 02:10:43.313066 2766 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 02:10:43.354649 kubelet[2766]: I0128 02:10:43.354132 2766 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 02:10:43.356739 kubelet[2766]: I0128 02:10:43.356456 2766 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:10:43.357181 kubelet[2766]: I0128 02:10:43.356970 2766 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 02:10:43.357695 kubelet[2766]: I0128 02:10:43.357676 2766 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 02:10:43.357793 
kubelet[2766]: I0128 02:10:43.357779 2766 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 02:10:43.358021 kubelet[2766]: I0128 02:10:43.358001 2766 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 02:10:43.359204 kubelet[2766]: I0128 02:10:43.359155 2766 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:10:43.359446 kubelet[2766]: I0128 02:10:43.359430 2766 kubelet.go:475] "Attempting to sync node with API server" Jan 28 02:10:43.359957 kubelet[2766]: I0128 02:10:43.359689 2766 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:10:43.359957 kubelet[2766]: I0128 02:10:43.359738 2766 kubelet.go:387] "Adding apiserver pod source" Jan 28 02:10:43.359957 kubelet[2766]: I0128 02:10:43.359778 2766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:10:43.364759 kubelet[2766]: I0128 02:10:43.364742 2766 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 02:10:43.365457 kubelet[2766]: I0128 02:10:43.365439 2766 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 02:10:43.365704 kubelet[2766]: I0128 02:10:43.365692 2766 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 02:10:43.373808 kubelet[2766]: I0128 02:10:43.373788 2766 server.go:1262] "Started kubelet" Jan 28 02:10:43.377394 kubelet[2766]: I0128 02:10:43.377373 2766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:10:43.403682 kubelet[2766]: I0128 02:10:43.403143 2766 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:10:43.405270 kubelet[2766]: I0128 02:10:43.405244 2766 server_v1.go:49] 
"podresources" method="list" useActivePods=true Jan 28 02:10:43.406152 kubelet[2766]: I0128 02:10:43.406133 2766 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:10:43.408791 kubelet[2766]: I0128 02:10:43.408765 2766 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:10:43.410099 kubelet[2766]: I0128 02:10:43.410083 2766 server.go:310] "Adding debug handlers to kubelet server" Jan 28 02:10:43.416328 kubelet[2766]: I0128 02:10:43.416212 2766 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 02:10:43.417218 kubelet[2766]: I0128 02:10:43.417195 2766 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 02:10:43.418968 kubelet[2766]: I0128 02:10:43.417354 2766 reconciler.go:29] "Reconciler: start to sync state" Jan 28 02:10:43.418968 kubelet[2766]: I0128 02:10:43.418328 2766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:10:43.425055 kubelet[2766]: E0128 02:10:43.421954 2766 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 02:10:43.433059 kubelet[2766]: I0128 02:10:43.433031 2766 factory.go:223] Registration of the systemd container factory successfully Jan 28 02:10:43.436139 kubelet[2766]: I0128 02:10:43.436109 2766 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:10:43.450251 kubelet[2766]: I0128 02:10:43.450224 2766 factory.go:223] Registration of the containerd container factory successfully Jan 28 02:10:43.567223 kubelet[2766]: I0128 02:10:43.566996 2766 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 28 02:10:43.577822 kubelet[2766]: I0128 02:10:43.577289 2766 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 02:10:43.578021 kubelet[2766]: I0128 02:10:43.577963 2766 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 02:10:43.580439 kubelet[2766]: I0128 02:10:43.580182 2766 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 02:10:43.580439 kubelet[2766]: E0128 02:10:43.580247 2766 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.668690 2766 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.668795 2766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.668824 2766 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.669088 2766 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.669099 2766 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.669122 2766 policy_none.go:49] "None policy: Start" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.669136 2766 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 02:10:43.669227 kubelet[2766]: I0128 02:10:43.669149 2766 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 02:10:43.669767 kubelet[2766]: I0128 02:10:43.669277 2766 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 02:10:43.669767 kubelet[2766]: I0128 02:10:43.669290 2766 policy_none.go:47] "Start" Jan 28 02:10:43.680977 kubelet[2766]: E0128 02:10:43.680839 2766 kubelet.go:2451] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" Jan 28 02:10:43.691326 kubelet[2766]: E0128 02:10:43.691299 2766 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 02:10:43.692997 kubelet[2766]: I0128 02:10:43.692957 2766 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:10:43.693720 kubelet[2766]: I0128 02:10:43.693676 2766 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:10:43.695483 kubelet[2766]: I0128 02:10:43.695360 2766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:10:43.703807 kubelet[2766]: E0128 02:10:43.703465 2766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 02:10:43.832672 kubelet[2766]: I0128 02:10:43.831395 2766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 02:10:43.886652 kubelet[2766]: I0128 02:10:43.886067 2766 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 02:10:43.886652 kubelet[2766]: I0128 02:10:43.886172 2766 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 02:10:43.912390 kubelet[2766]: I0128 02:10:43.909392 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:43.916220 kubelet[2766]: I0128 02:10:43.911826 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:43.919126 kubelet[2766]: I0128 02:10:43.918124 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:43.944687 kubelet[2766]: I0128 02:10:43.942209 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/2957bf6a174e7ec90111fe9191464798-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2957bf6a174e7ec90111fe9191464798\") " pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:43.944687 kubelet[2766]: I0128 02:10:43.942272 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2957bf6a174e7ec90111fe9191464798-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2957bf6a174e7ec90111fe9191464798\") " pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:43.944687 kubelet[2766]: I0128 02:10:43.942295 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2957bf6a174e7ec90111fe9191464798-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2957bf6a174e7ec90111fe9191464798\") " pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:43.944687 kubelet[2766]: I0128 02:10:43.942420 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:43.944687 kubelet[2766]: I0128 02:10:43.942439 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:43.945080 kubelet[2766]: I0128 02:10:43.942452 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:43.976753 kubelet[2766]: E0128 02:10:43.975004 2766 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:43.977089 kubelet[2766]: E0128 02:10:43.976994 2766 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:43.986466 sudo[2783]: pam_unix(sudo:session): session closed for user root Jan 28 02:10:44.042767 kubelet[2766]: I0128 02:10:44.042728 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 28 02:10:44.043429 kubelet[2766]: I0128 02:10:44.043404 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:44.044647 kubelet[2766]: I0128 02:10:44.044621 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 02:10:44.360681 kubelet[2766]: I0128 
02:10:44.360481 2766 apiserver.go:52] "Watching apiserver" Jan 28 02:10:44.417424 kubelet[2766]: I0128 02:10:44.417367 2766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 02:10:44.594731 kubelet[2766]: I0128 02:10:44.594246 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.594209135 podStartE2EDuration="1.594209135s" podCreationTimestamp="2026-01-28 02:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:10:44.594038552 +0000 UTC m=+1.558375430" watchObservedRunningTime="2026-01-28 02:10:44.594209135 +0000 UTC m=+1.558546012" Jan 28 02:10:44.631732 kubelet[2766]: I0128 02:10:44.631378 2766 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:44.665758 kubelet[2766]: E0128 02:10:44.665405 2766 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 02:10:44.691692 kubelet[2766]: I0128 02:10:44.691416 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.691397486 podStartE2EDuration="3.691397486s" podCreationTimestamp="2026-01-28 02:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:10:44.663131109 +0000 UTC m=+1.627467986" watchObservedRunningTime="2026-01-28 02:10:44.691397486 +0000 UTC m=+1.655734363" Jan 28 02:10:46.437159 kubelet[2766]: I0128 02:10:46.437019 2766 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 02:10:46.445063 containerd[1551]: time="2026-01-28T02:10:46.444963771Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Jan 28 02:10:46.449134 kubelet[2766]: I0128 02:10:46.448960 2766 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 02:10:46.697230 sudo[1757]: pam_unix(sudo:session): session closed for user root Jan 28 02:10:46.700722 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jan 28 02:10:46.702440 sshd[1756]: Connection closed by 10.0.0.1 port 52382 Jan 28 02:10:46.708702 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Jan 28 02:10:46.709432 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:52382.service: Deactivated successfully. Jan 28 02:10:46.715267 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 02:10:46.715938 systemd[1]: session-7.scope: Consumed 12.740s CPU time, 264.9M memory peak. Jan 28 02:10:46.722267 systemd-logind[1534]: Removed session 7. Jan 28 02:10:47.284340 systemd[1]: Created slice kubepods-besteffort-pod0cd404b0_e738_4f9e_9196_b602ed6a6c03.slice - libcontainer container kubepods-besteffort-pod0cd404b0_e738_4f9e_9196_b602ed6a6c03.slice. 
Jan 28 02:10:47.287862 kubelet[2766]: I0128 02:10:47.287324 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cd404b0-e738-4f9e-9196-b602ed6a6c03-kube-proxy\") pod \"kube-proxy-mmghq\" (UID: \"0cd404b0-e738-4f9e-9196-b602ed6a6c03\") " pod="kube-system/kube-proxy-mmghq" Jan 28 02:10:47.287862 kubelet[2766]: I0128 02:10:47.287454 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd404b0-e738-4f9e-9196-b602ed6a6c03-xtables-lock\") pod \"kube-proxy-mmghq\" (UID: \"0cd404b0-e738-4f9e-9196-b602ed6a6c03\") " pod="kube-system/kube-proxy-mmghq" Jan 28 02:10:47.287862 kubelet[2766]: I0128 02:10:47.287477 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd404b0-e738-4f9e-9196-b602ed6a6c03-lib-modules\") pod \"kube-proxy-mmghq\" (UID: \"0cd404b0-e738-4f9e-9196-b602ed6a6c03\") " pod="kube-system/kube-proxy-mmghq" Jan 28 02:10:47.287862 kubelet[2766]: I0128 02:10:47.287731 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2gkr\" (UniqueName: \"kubernetes.io/projected/0cd404b0-e738-4f9e-9196-b602ed6a6c03-kube-api-access-x2gkr\") pod \"kube-proxy-mmghq\" (UID: \"0cd404b0-e738-4f9e-9196-b602ed6a6c03\") " pod="kube-system/kube-proxy-mmghq" Jan 28 02:10:47.314142 systemd[1]: Created slice kubepods-burstable-pod59f2b9f5_b2f7_45c3_8a8d_eda832ce45e1.slice - libcontainer container kubepods-burstable-pod59f2b9f5_b2f7_45c3_8a8d_eda832ce45e1.slice. 
Jan 28 02:10:47.389697 kubelet[2766]: I0128 02:10:47.388235 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-bpf-maps\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.389697 kubelet[2766]: I0128 02:10:47.388286 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hostproc\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.389697 kubelet[2766]: I0128 02:10:47.388311 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-cgroup\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.389697 kubelet[2766]: I0128 02:10:47.388334 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-lib-modules\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.389697 kubelet[2766]: I0128 02:10:47.388358 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-xtables-lock\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.389697 kubelet[2766]: I0128 02:10:47.388381 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-clustermesh-secrets\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390126 kubelet[2766]: I0128 02:10:47.388404 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-kernel\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390126 kubelet[2766]: I0128 02:10:47.388913 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hubble-tls\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390126 kubelet[2766]: I0128 02:10:47.388957 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cni-path\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390126 kubelet[2766]: I0128 02:10:47.388979 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-etc-cni-netd\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390126 kubelet[2766]: I0128 02:10:47.389004 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-config-path\") pod 
\"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390126 kubelet[2766]: I0128 02:10:47.389028 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-net\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390306 kubelet[2766]: I0128 02:10:47.389052 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvrc\" (UniqueName: \"kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-kube-api-access-dqvrc\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.390306 kubelet[2766]: I0128 02:10:47.389089 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-run\") pod \"cilium-s2lth\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") " pod="kube-system/cilium-s2lth" Jan 28 02:10:47.621338 containerd[1551]: time="2026-01-28T02:10:47.621038872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmghq,Uid:0cd404b0-e738-4f9e-9196-b602ed6a6c03,Namespace:kube-system,Attempt:0,}" Jan 28 02:10:47.643705 systemd[1]: Created slice kubepods-besteffort-pod072dec10_02b3_4f7a_b4fa_aabda3ec5bf6.slice - libcontainer container kubepods-besteffort-pod072dec10_02b3_4f7a_b4fa_aabda3ec5bf6.slice. 
Jan 28 02:10:47.650666 containerd[1551]: time="2026-01-28T02:10:47.649264426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2lth,Uid:59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1,Namespace:kube-system,Attempt:0,}" Jan 28 02:10:47.696156 kubelet[2766]: I0128 02:10:47.695460 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6dxh\" (UniqueName: \"kubernetes.io/projected/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-kube-api-access-n6dxh\") pod \"cilium-operator-6f9c7c5859-nfw7c\" (UID: \"072dec10-02b3-4f7a-b4fa-aabda3ec5bf6\") " pod="kube-system/cilium-operator-6f9c7c5859-nfw7c" Jan 28 02:10:47.696156 kubelet[2766]: I0128 02:10:47.695863 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-nfw7c\" (UID: \"072dec10-02b3-4f7a-b4fa-aabda3ec5bf6\") " pod="kube-system/cilium-operator-6f9c7c5859-nfw7c" Jan 28 02:10:47.746704 containerd[1551]: time="2026-01-28T02:10:47.745913840Z" level=info msg="connecting to shim d8188091ceaa9a3edd2779e52f1bacdf6d14a2002695851a8fe6f546a9d7ac43" address="unix:///run/containerd/s/41bbdac33dfa732e23ae40edafd03124bad209a509f3a669024e993ef969e234" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:10:47.785964 containerd[1551]: time="2026-01-28T02:10:47.784964137Z" level=info msg="connecting to shim c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3" address="unix:///run/containerd/s/40d5b6eb1b6d52a6a39e5cff0eb06fa2775fdbe66757a9ac0acb9a15f614404b" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:10:47.873170 systemd[1]: Started cri-containerd-d8188091ceaa9a3edd2779e52f1bacdf6d14a2002695851a8fe6f546a9d7ac43.scope - libcontainer container d8188091ceaa9a3edd2779e52f1bacdf6d14a2002695851a8fe6f546a9d7ac43. 
Jan 28 02:10:47.924701 systemd[1]: Started cri-containerd-c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3.scope - libcontainer container c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3. Jan 28 02:10:47.965409 containerd[1551]: time="2026-01-28T02:10:47.965210245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-nfw7c,Uid:072dec10-02b3-4f7a-b4fa-aabda3ec5bf6,Namespace:kube-system,Attempt:0,}" Jan 28 02:10:48.006168 containerd[1551]: time="2026-01-28T02:10:48.006059528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmghq,Uid:0cd404b0-e738-4f9e-9196-b602ed6a6c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8188091ceaa9a3edd2779e52f1bacdf6d14a2002695851a8fe6f546a9d7ac43\"" Jan 28 02:10:48.055303 containerd[1551]: time="2026-01-28T02:10:48.055003280Z" level=info msg="CreateContainer within sandbox \"d8188091ceaa9a3edd2779e52f1bacdf6d14a2002695851a8fe6f546a9d7ac43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 02:10:48.062403 containerd[1551]: time="2026-01-28T02:10:48.062288974Z" level=info msg="connecting to shim 5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e" address="unix:///run/containerd/s/c6670f7ea4d12f30c20863789b508023fe6bc405e9eb5bd6d29797d26ab24123" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:10:48.101412 containerd[1551]: time="2026-01-28T02:10:48.101377573Z" level=info msg="Container ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:10:48.126041 containerd[1551]: time="2026-01-28T02:10:48.125449812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2lth,Uid:59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\"" Jan 28 02:10:48.127091 containerd[1551]: time="2026-01-28T02:10:48.126149871Z" level=info 
msg="CreateContainer within sandbox \"d8188091ceaa9a3edd2779e52f1bacdf6d14a2002695851a8fe6f546a9d7ac43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d\"" Jan 28 02:10:48.133165 containerd[1551]: time="2026-01-28T02:10:48.133113668Z" level=info msg="StartContainer for \"ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d\"" Jan 28 02:10:48.156301 containerd[1551]: time="2026-01-28T02:10:48.155673635Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 28 02:10:48.158014 containerd[1551]: time="2026-01-28T02:10:48.157987032Z" level=info msg="connecting to shim ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d" address="unix:///run/containerd/s/41bbdac33dfa732e23ae40edafd03124bad209a509f3a669024e993ef969e234" protocol=ttrpc version=3 Jan 28 02:10:48.177184 systemd[1]: Started cri-containerd-5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e.scope - libcontainer container 5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e. Jan 28 02:10:48.264206 systemd[1]: Started cri-containerd-ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d.scope - libcontainer container ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d. 
Jan 28 02:10:48.344109 containerd[1551]: time="2026-01-28T02:10:48.344009574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-nfw7c,Uid:072dec10-02b3-4f7a-b4fa-aabda3ec5bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\"" Jan 28 02:10:48.456842 containerd[1551]: time="2026-01-28T02:10:48.455934500Z" level=info msg="StartContainer for \"ea4726a8d39bf88c0f9905bd170ad1689621698230094b4a296e449f9bce539d\" returns successfully" Jan 28 02:10:52.880867 kubelet[2766]: I0128 02:10:52.879488 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mmghq" podStartSLOduration=5.879467871 podStartE2EDuration="5.879467871s" podCreationTimestamp="2026-01-28 02:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:10:48.714862862 +0000 UTC m=+5.679199738" watchObservedRunningTime="2026-01-28 02:10:52.879467871 +0000 UTC m=+9.843804758" Jan 28 02:11:23.835353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951170700.mount: Deactivated successfully. 
Jan 28 02:11:39.833302 containerd[1551]: time="2026-01-28T02:11:39.832476600Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:11:39.842248 containerd[1551]: time="2026-01-28T02:11:39.842053453Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 28 02:11:39.849990 containerd[1551]: time="2026-01-28T02:11:39.849914837Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:11:39.862028 containerd[1551]: time="2026-01-28T02:11:39.861883376Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 51.705979319s" Jan 28 02:11:39.862028 containerd[1551]: time="2026-01-28T02:11:39.861931345Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 28 02:11:39.873869 containerd[1551]: time="2026-01-28T02:11:39.872071478Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 28 02:11:39.909844 containerd[1551]: time="2026-01-28T02:11:39.909447667Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 02:11:40.002247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164111565.mount: Deactivated successfully. Jan 28 02:11:40.010034 containerd[1551]: time="2026-01-28T02:11:40.003374127Z" level=info msg="Container 4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:11:40.016321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983367527.mount: Deactivated successfully. Jan 28 02:11:40.070076 containerd[1551]: time="2026-01-28T02:11:40.069856280Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\"" Jan 28 02:11:40.077036 containerd[1551]: time="2026-01-28T02:11:40.075270668Z" level=info msg="StartContainer for \"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\"" Jan 28 02:11:40.086081 containerd[1551]: time="2026-01-28T02:11:40.085458818Z" level=info msg="connecting to shim 4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b" address="unix:///run/containerd/s/40d5b6eb1b6d52a6a39e5cff0eb06fa2775fdbe66757a9ac0acb9a15f614404b" protocol=ttrpc version=3 Jan 28 02:11:40.382880 systemd[1]: Started cri-containerd-4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b.scope - libcontainer container 4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b. Jan 28 02:11:40.721986 containerd[1551]: time="2026-01-28T02:11:40.720020155Z" level=info msg="StartContainer for \"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\" returns successfully" Jan 28 02:11:40.800281 systemd[1]: cri-containerd-4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b.scope: Deactivated successfully. 
Jan 28 02:11:40.845923 containerd[1551]: time="2026-01-28T02:11:40.845828034Z" level=info msg="received container exit event container_id:\"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\" id:\"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\" pid:3208 exited_at:{seconds:1769566300 nanos:841819269}" Jan 28 02:11:41.118747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b-rootfs.mount: Deactivated successfully. Jan 28 02:11:41.522885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352742333.mount: Deactivated successfully. Jan 28 02:11:41.867941 containerd[1551]: time="2026-01-28T02:11:41.866869657Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 02:11:41.983753 containerd[1551]: time="2026-01-28T02:11:41.982917817Z" level=info msg="Container 841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:11:42.064371 containerd[1551]: time="2026-01-28T02:11:42.064329694Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\"" Jan 28 02:11:42.070020 containerd[1551]: time="2026-01-28T02:11:42.069965865Z" level=info msg="StartContainer for \"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\"" Jan 28 02:11:42.090830 containerd[1551]: time="2026-01-28T02:11:42.090778395Z" level=info msg="connecting to shim 841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7" address="unix:///run/containerd/s/40d5b6eb1b6d52a6a39e5cff0eb06fa2775fdbe66757a9ac0acb9a15f614404b" protocol=ttrpc version=3 Jan 28 
02:11:42.402980 systemd[1]: Started cri-containerd-841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7.scope - libcontainer container 841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7. Jan 28 02:11:42.831244 containerd[1551]: time="2026-01-28T02:11:42.826832390Z" level=info msg="StartContainer for \"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\" returns successfully" Jan 28 02:11:42.885774 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 02:11:42.886302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:11:42.889421 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 28 02:11:42.895328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 02:11:42.899843 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 02:11:42.903031 systemd[1]: cri-containerd-841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7.scope: Deactivated successfully. Jan 28 02:11:42.904282 containerd[1551]: time="2026-01-28T02:11:42.903829477Z" level=info msg="received container exit event container_id:\"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\" id:\"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\" pid:3268 exited_at:{seconds:1769566302 nanos:902803688}" Jan 28 02:11:43.048275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:11:43.082409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7-rootfs.mount: Deactivated successfully. 
Jan 28 02:11:43.916061 containerd[1551]: time="2026-01-28T02:11:43.915456211Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 02:11:43.997757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355298338.mount: Deactivated successfully. Jan 28 02:11:44.029307 containerd[1551]: time="2026-01-28T02:11:44.027397548Z" level=info msg="Container 591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:11:44.029235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646400549.mount: Deactivated successfully. Jan 28 02:11:44.089274 containerd[1551]: time="2026-01-28T02:11:44.089222521Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\"" Jan 28 02:11:44.103043 containerd[1551]: time="2026-01-28T02:11:44.102749122Z" level=info msg="StartContainer for \"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\"" Jan 28 02:11:44.110694 containerd[1551]: time="2026-01-28T02:11:44.108315421Z" level=info msg="connecting to shim 591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481" address="unix:///run/containerd/s/40d5b6eb1b6d52a6a39e5cff0eb06fa2775fdbe66757a9ac0acb9a15f614404b" protocol=ttrpc version=3 Jan 28 02:11:44.300347 systemd[1]: Started cri-containerd-591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481.scope - libcontainer container 591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481. 
Jan 28 02:11:44.669273 containerd[1551]: time="2026-01-28T02:11:44.666921486Z" level=info msg="StartContainer for \"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\" returns successfully" Jan 28 02:11:44.698016 systemd[1]: cri-containerd-591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481.scope: Deactivated successfully. Jan 28 02:11:44.713877 containerd[1551]: time="2026-01-28T02:11:44.711272857Z" level=info msg="received container exit event container_id:\"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\" id:\"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\" pid:3317 exited_at:{seconds:1769566304 nanos:709471389}" Jan 28 02:11:44.991450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481-rootfs.mount: Deactivated successfully. Jan 28 02:11:45.968648 containerd[1551]: time="2026-01-28T02:11:45.966761399Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 02:11:46.085417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932116093.mount: Deactivated successfully. Jan 28 02:11:46.108767 containerd[1551]: time="2026-01-28T02:11:46.107872345Z" level=info msg="Container ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:11:46.108220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818805319.mount: Deactivated successfully. 
Jan 28 02:11:46.144222 containerd[1551]: time="2026-01-28T02:11:46.144042018Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\"" Jan 28 02:11:46.172738 containerd[1551]: time="2026-01-28T02:11:46.171293844Z" level=info msg="StartContainer for \"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\"" Jan 28 02:11:46.207226 containerd[1551]: time="2026-01-28T02:11:46.204231519Z" level=info msg="connecting to shim ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe" address="unix:///run/containerd/s/40d5b6eb1b6d52a6a39e5cff0eb06fa2775fdbe66757a9ac0acb9a15f614404b" protocol=ttrpc version=3 Jan 28 02:11:46.366918 systemd[1]: Started cri-containerd-ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe.scope - libcontainer container ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe. Jan 28 02:11:46.613898 systemd[1]: cri-containerd-ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe.scope: Deactivated successfully. 
Jan 28 02:11:46.631448 containerd[1551]: time="2026-01-28T02:11:46.631237224Z" level=info msg="received container exit event container_id:\"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\" id:\"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\" pid:3356 exited_at:{seconds:1769566306 nanos:628870328}" Jan 28 02:11:46.707034 containerd[1551]: time="2026-01-28T02:11:46.706427149Z" level=info msg="StartContainer for \"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\" returns successfully" Jan 28 02:11:47.046425 containerd[1551]: time="2026-01-28T02:11:47.045789401Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 02:11:47.080473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe-rootfs.mount: Deactivated successfully. Jan 28 02:11:47.177737 containerd[1551]: time="2026-01-28T02:11:47.171806237Z" level=info msg="Container 6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:11:47.223475 containerd[1551]: time="2026-01-28T02:11:47.221863335Z" level=info msg="CreateContainer within sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\"" Jan 28 02:11:47.231374 containerd[1551]: time="2026-01-28T02:11:47.230969864Z" level=info msg="StartContainer for \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\"" Jan 28 02:11:47.274482 containerd[1551]: time="2026-01-28T02:11:47.273252907Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 28 02:11:47.276982 containerd[1551]: time="2026-01-28T02:11:47.276812951Z" level=info msg="connecting to shim 6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc" address="unix:///run/containerd/s/40d5b6eb1b6d52a6a39e5cff0eb06fa2775fdbe66757a9ac0acb9a15f614404b" protocol=ttrpc version=3 Jan 28 02:11:47.286977 containerd[1551]: time="2026-01-28T02:11:47.284477148Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 28 02:11:47.293472 containerd[1551]: time="2026-01-28T02:11:47.293269905Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:11:47.299880 containerd[1551]: time="2026-01-28T02:11:47.298928017Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.42666789s" Jan 28 02:11:47.299880 containerd[1551]: time="2026-01-28T02:11:47.298973150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 28 02:11:47.361245 containerd[1551]: time="2026-01-28T02:11:47.361202575Z" level=info msg="CreateContainer within sandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 28 02:11:47.388981 systemd[1]: Started 
cri-containerd-6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc.scope - libcontainer container 6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc. Jan 28 02:11:47.433998 containerd[1551]: time="2026-01-28T02:11:47.431909811Z" level=info msg="Container f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:11:47.500021 containerd[1551]: time="2026-01-28T02:11:47.499973389Z" level=info msg="CreateContainer within sandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\"" Jan 28 02:11:47.511739 containerd[1551]: time="2026-01-28T02:11:47.510912582Z" level=info msg="StartContainer for \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\"" Jan 28 02:11:47.517173 containerd[1551]: time="2026-01-28T02:11:47.515282807Z" level=info msg="connecting to shim f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978" address="unix:///run/containerd/s/c6670f7ea4d12f30c20863789b508023fe6bc405e9eb5bd6d29797d26ab24123" protocol=ttrpc version=3 Jan 28 02:11:47.726952 systemd[1]: Started cri-containerd-f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978.scope - libcontainer container f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978. 
Jan 28 02:11:47.848802 containerd[1551]: time="2026-01-28T02:11:47.846011304Z" level=info msg="StartContainer for \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" returns successfully" Jan 28 02:11:48.172809 containerd[1551]: time="2026-01-28T02:11:48.172226654Z" level=info msg="StartContainer for \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" returns successfully" Jan 28 02:11:48.518182 kubelet[2766]: I0128 02:11:48.518011 2766 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 28 02:11:48.839976 systemd[1]: Created slice kubepods-burstable-pod8108323e_a35e_4e84_b9ed_56cac91622a8.slice - libcontainer container kubepods-burstable-pod8108323e_a35e_4e84_b9ed_56cac91622a8.slice. Jan 28 02:11:48.893305 systemd[1]: Created slice kubepods-burstable-podc42e31f9_7fe6_4293_821c_c548db85af15.slice - libcontainer container kubepods-burstable-podc42e31f9_7fe6_4293_821c_c548db85af15.slice. Jan 28 02:11:48.959778 kubelet[2766]: I0128 02:11:48.959440 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trnhb\" (UniqueName: \"kubernetes.io/projected/c42e31f9-7fe6-4293-821c-c548db85af15-kube-api-access-trnhb\") pod \"coredns-66bc5c9577-vpx5f\" (UID: \"c42e31f9-7fe6-4293-821c-c548db85af15\") " pod="kube-system/coredns-66bc5c9577-vpx5f" Jan 28 02:11:48.961349 kubelet[2766]: I0128 02:11:48.961322 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42e31f9-7fe6-4293-821c-c548db85af15-config-volume\") pod \"coredns-66bc5c9577-vpx5f\" (UID: \"c42e31f9-7fe6-4293-821c-c548db85af15\") " pod="kube-system/coredns-66bc5c9577-vpx5f" Jan 28 02:11:48.962795 kubelet[2766]: I0128 02:11:48.962772 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcchg\" (UniqueName: 
\"kubernetes.io/projected/8108323e-a35e-4e84-b9ed-56cac91622a8-kube-api-access-wcchg\") pod \"coredns-66bc5c9577-jf575\" (UID: \"8108323e-a35e-4e84-b9ed-56cac91622a8\") " pod="kube-system/coredns-66bc5c9577-jf575" Jan 28 02:11:48.962901 kubelet[2766]: I0128 02:11:48.962885 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8108323e-a35e-4e84-b9ed-56cac91622a8-config-volume\") pod \"coredns-66bc5c9577-jf575\" (UID: \"8108323e-a35e-4e84-b9ed-56cac91622a8\") " pod="kube-system/coredns-66bc5c9577-jf575" Jan 28 02:11:49.253771 containerd[1551]: time="2026-01-28T02:11:49.252854638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vpx5f,Uid:c42e31f9-7fe6-4293-821c-c548db85af15,Namespace:kube-system,Attempt:0,}" Jan 28 02:11:49.550482 containerd[1551]: time="2026-01-28T02:11:49.548237995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jf575,Uid:8108323e-a35e-4e84-b9ed-56cac91622a8,Namespace:kube-system,Attempt:0,}" Jan 28 02:11:49.564428 kubelet[2766]: I0128 02:11:49.563998 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-nfw7c" podStartSLOduration=3.606009888 podStartE2EDuration="1m2.563977058s" podCreationTimestamp="2026-01-28 02:10:47 +0000 UTC" firstStartedPulling="2026-01-28 02:10:48.350362139 +0000 UTC m=+5.314699017" lastFinishedPulling="2026-01-28 02:11:47.3083293 +0000 UTC m=+64.272666187" observedRunningTime="2026-01-28 02:11:49.516419585 +0000 UTC m=+66.480756472" watchObservedRunningTime="2026-01-28 02:11:49.563977058 +0000 UTC m=+66.528313945" Jan 28 02:11:50.205674 kubelet[2766]: I0128 02:11:50.204971 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s2lth" podStartSLOduration=11.485376228 podStartE2EDuration="1m3.204945954s" podCreationTimestamp="2026-01-28 02:10:47 +0000 UTC" 
firstStartedPulling="2026-01-28 02:10:48.146905441 +0000 UTC m=+5.111242318" lastFinishedPulling="2026-01-28 02:11:39.866475167 +0000 UTC m=+56.830812044" observedRunningTime="2026-01-28 02:11:50.17132326 +0000 UTC m=+67.135660147" watchObservedRunningTime="2026-01-28 02:11:50.204945954 +0000 UTC m=+67.169282831" Jan 28 02:11:57.646438 systemd-networkd[1469]: cilium_host: Link UP Jan 28 02:11:57.656304 systemd-networkd[1469]: cilium_net: Link UP Jan 28 02:11:57.656956 systemd-networkd[1469]: cilium_net: Gained carrier Jan 28 02:11:57.657334 systemd-networkd[1469]: cilium_host: Gained carrier Jan 28 02:11:58.048804 systemd-networkd[1469]: cilium_host: Gained IPv6LL Jan 28 02:11:58.333272 systemd-networkd[1469]: cilium_net: Gained IPv6LL Jan 28 02:11:59.063896 systemd-networkd[1469]: cilium_vxlan: Link UP Jan 28 02:11:59.063913 systemd-networkd[1469]: cilium_vxlan: Gained carrier Jan 28 02:12:00.388689 systemd-networkd[1469]: cilium_vxlan: Gained IPv6LL Jan 28 02:12:00.482891 kernel: NET: Registered PF_ALG protocol family Jan 28 02:12:05.705206 systemd-networkd[1469]: lxc_health: Link UP Jan 28 02:12:05.722287 systemd-networkd[1469]: lxc_health: Gained carrier Jan 28 02:12:06.305870 systemd-networkd[1469]: lxcf6a5137eccc9: Link UP Jan 28 02:12:06.322831 kernel: eth0: renamed from tmp75132 Jan 28 02:12:06.328783 systemd-networkd[1469]: lxcf6a5137eccc9: Gained carrier Jan 28 02:12:06.558884 systemd-networkd[1469]: lxcfc5515355f30: Link UP Jan 28 02:12:06.584298 kernel: eth0: renamed from tmpd1ed3 Jan 28 02:12:06.633416 systemd-networkd[1469]: lxcfc5515355f30: Gained carrier Jan 28 02:12:06.781386 systemd-networkd[1469]: lxc_health: Gained IPv6LL Jan 28 02:12:07.677455 systemd-networkd[1469]: lxcf6a5137eccc9: Gained IPv6LL Jan 28 02:12:08.701120 systemd-networkd[1469]: lxcfc5515355f30: Gained IPv6LL Jan 28 02:12:14.390141 containerd[1551]: time="2026-01-28T02:12:14.390087774Z" level=info msg="connecting to shim 
75132f4691b066e933bfef70b5eb91814304a06f096d448ed5bf59e59e0db88c" address="unix:///run/containerd/s/7c9d8d8157700071d35a2e599df074d5a5a495508088bdd1a78ac14823ea5042" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:12:14.403263 containerd[1551]: time="2026-01-28T02:12:14.403190047Z" level=info msg="connecting to shim d1ed37d1fc51e83467585a345312b461166d7947c63453b39fdd966eadb3c175" address="unix:///run/containerd/s/87b67ef8ed9bc645170a866db6bf3e2d19c7778321a798f2316c7ce4b143e0d4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:12:14.505133 systemd[1]: Started cri-containerd-75132f4691b066e933bfef70b5eb91814304a06f096d448ed5bf59e59e0db88c.scope - libcontainer container 75132f4691b066e933bfef70b5eb91814304a06f096d448ed5bf59e59e0db88c. Jan 28 02:12:14.508785 systemd[1]: Started cri-containerd-d1ed37d1fc51e83467585a345312b461166d7947c63453b39fdd966eadb3c175.scope - libcontainer container d1ed37d1fc51e83467585a345312b461166d7947c63453b39fdd966eadb3c175. Jan 28 02:12:14.550042 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:12:14.555046 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:12:14.674886 containerd[1551]: time="2026-01-28T02:12:14.674690599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jf575,Uid:8108323e-a35e-4e84-b9ed-56cac91622a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1ed37d1fc51e83467585a345312b461166d7947c63453b39fdd966eadb3c175\"" Jan 28 02:12:14.714761 containerd[1551]: time="2026-01-28T02:12:14.713709450Z" level=info msg="CreateContainer within sandbox \"d1ed37d1fc51e83467585a345312b461166d7947c63453b39fdd966eadb3c175\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:12:14.726780 containerd[1551]: time="2026-01-28T02:12:14.726434964Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-vpx5f,Uid:c42e31f9-7fe6-4293-821c-c548db85af15,Namespace:kube-system,Attempt:0,} returns sandbox id \"75132f4691b066e933bfef70b5eb91814304a06f096d448ed5bf59e59e0db88c\"" Jan 28 02:12:14.757676 containerd[1551]: time="2026-01-28T02:12:14.757638001Z" level=info msg="CreateContainer within sandbox \"75132f4691b066e933bfef70b5eb91814304a06f096d448ed5bf59e59e0db88c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:12:14.793838 containerd[1551]: time="2026-01-28T02:12:14.793435533Z" level=info msg="Container 94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:12:14.808089 containerd[1551]: time="2026-01-28T02:12:14.807834354Z" level=info msg="Container caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:12:14.820839 containerd[1551]: time="2026-01-28T02:12:14.820731322Z" level=info msg="CreateContainer within sandbox \"d1ed37d1fc51e83467585a345312b461166d7947c63453b39fdd966eadb3c175\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55\"" Jan 28 02:12:14.824683 containerd[1551]: time="2026-01-28T02:12:14.822651864Z" level=info msg="StartContainer for \"94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55\"" Jan 28 02:12:14.825331 containerd[1551]: time="2026-01-28T02:12:14.825303541Z" level=info msg="connecting to shim 94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55" address="unix:///run/containerd/s/87b67ef8ed9bc645170a866db6bf3e2d19c7778321a798f2316c7ce4b143e0d4" protocol=ttrpc version=3 Jan 28 02:12:14.851723 containerd[1551]: time="2026-01-28T02:12:14.850672198Z" level=info msg="CreateContainer within sandbox \"75132f4691b066e933bfef70b5eb91814304a06f096d448ed5bf59e59e0db88c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c\"" Jan 28 02:12:14.854385 containerd[1551]: time="2026-01-28T02:12:14.852869456Z" level=info msg="StartContainer for \"caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c\"" Jan 28 02:12:14.857264 containerd[1551]: time="2026-01-28T02:12:14.857233767Z" level=info msg="connecting to shim caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c" address="unix:///run/containerd/s/7c9d8d8157700071d35a2e599df074d5a5a495508088bdd1a78ac14823ea5042" protocol=ttrpc version=3 Jan 28 02:12:14.910781 systemd[1]: Started cri-containerd-94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55.scope - libcontainer container 94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55. Jan 28 02:12:14.949082 systemd[1]: Started cri-containerd-caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c.scope - libcontainer container caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c. Jan 28 02:12:15.101780 containerd[1551]: time="2026-01-28T02:12:15.101314494Z" level=info msg="StartContainer for \"caa13c1d6066eefbebcd1132003e270f4a57bf142c02eb6f710a6417ca04b94c\" returns successfully" Jan 28 02:12:15.101780 containerd[1551]: time="2026-01-28T02:12:15.101703328Z" level=info msg="StartContainer for \"94a5a59725ff875b3da879f953533cd3c51038285d625601ade3be38f6a78a55\" returns successfully" Jan 28 02:12:15.708784 kubelet[2766]: I0128 02:12:15.708116 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jf575" podStartSLOduration=88.708091348 podStartE2EDuration="1m28.708091348s" podCreationTimestamp="2026-01-28 02:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:12:15.665352084 +0000 UTC m=+92.629688971" watchObservedRunningTime="2026-01-28 02:12:15.708091348 +0000 UTC m=+92.672428285" Jan 28 02:12:15.708784 
kubelet[2766]: I0128 02:12:15.708233 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vpx5f" podStartSLOduration=88.708222463 podStartE2EDuration="1m28.708222463s" podCreationTimestamp="2026-01-28 02:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:12:15.704248277 +0000 UTC m=+92.668585154" watchObservedRunningTime="2026-01-28 02:12:15.708222463 +0000 UTC m=+92.672559340" Jan 28 02:13:15.514068 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:45334.service - OpenSSH per-connection server daemon (10.0.0.1:45334). Jan 28 02:13:15.704911 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 45334 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:15.709025 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:15.728876 systemd-logind[1534]: New session 8 of user core. Jan 28 02:13:15.741890 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 02:13:16.362853 sshd[4136]: Connection closed by 10.0.0.1 port 45334 Jan 28 02:13:16.363389 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:16.379352 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:45334.service: Deactivated successfully. Jan 28 02:13:16.385828 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 02:13:16.390844 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Jan 28 02:13:16.396350 systemd-logind[1534]: Removed session 8. Jan 28 02:13:21.394767 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). 
Jan 28 02:13:21.545044 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:21.547933 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:21.568380 systemd-logind[1534]: New session 9 of user core. Jan 28 02:13:21.579970 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 02:13:22.001089 sshd[4155]: Connection closed by 10.0.0.1 port 45342 Jan 28 02:13:22.002757 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:22.014157 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:45342.service: Deactivated successfully. Jan 28 02:13:22.020429 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 02:13:22.029917 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Jan 28 02:13:22.039045 systemd-logind[1534]: Removed session 9. Jan 28 02:13:27.064060 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:34448.service - OpenSSH per-connection server daemon (10.0.0.1:34448). Jan 28 02:13:27.228868 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 34448 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:27.241139 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:27.283391 systemd-logind[1534]: New session 10 of user core. Jan 28 02:13:27.302470 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 02:13:27.900888 sshd[4172]: Connection closed by 10.0.0.1 port 34448 Jan 28 02:13:27.901757 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:27.932406 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:34448.service: Deactivated successfully. Jan 28 02:13:27.937435 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 02:13:27.945371 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. 
Jan 28 02:13:27.953197 systemd-logind[1534]: Removed session 10. Jan 28 02:13:32.919091 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:34452.service - OpenSSH per-connection server daemon (10.0.0.1:34452). Jan 28 02:13:33.079485 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 34452 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:33.082874 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:33.100703 systemd-logind[1534]: New session 11 of user core. Jan 28 02:13:33.109452 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 02:13:33.541787 sshd[4189]: Connection closed by 10.0.0.1 port 34452 Jan 28 02:13:33.543013 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:33.554952 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:34452.service: Deactivated successfully. Jan 28 02:13:33.561476 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 02:13:33.565142 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Jan 28 02:13:33.570914 systemd-logind[1534]: Removed session 11. Jan 28 02:13:38.579147 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:55156.service - OpenSSH per-connection server daemon (10.0.0.1:55156). Jan 28 02:13:38.725176 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 55156 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:38.729142 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:38.749738 systemd-logind[1534]: New session 12 of user core. Jan 28 02:13:38.762964 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 28 02:13:39.138971 sshd[4207]: Connection closed by 10.0.0.1 port 55156 Jan 28 02:13:39.139942 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:39.167954 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:55156.service: Deactivated successfully. Jan 28 02:13:39.173927 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 02:13:39.177103 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Jan 28 02:13:39.183100 systemd-logind[1534]: Removed session 12. Jan 28 02:13:44.180164 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:55160.service - OpenSSH per-connection server daemon (10.0.0.1:55160). Jan 28 02:13:44.408932 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:44.410898 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:44.437150 systemd-logind[1534]: New session 13 of user core. Jan 28 02:13:44.449166 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 02:13:44.919706 sshd[4226]: Connection closed by 10.0.0.1 port 55160 Jan 28 02:13:44.925766 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:44.947054 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Jan 28 02:13:44.948813 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:55160.service: Deactivated successfully. Jan 28 02:13:44.963142 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 02:13:44.970455 systemd-logind[1534]: Removed session 13. Jan 28 02:13:49.954092 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:55478.service - OpenSSH per-connection server daemon (10.0.0.1:55478). 
Jan 28 02:13:50.198836 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 55478 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:50.208035 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:50.254752 systemd-logind[1534]: New session 14 of user core. Jan 28 02:13:50.276193 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 02:13:50.735079 sshd[4247]: Connection closed by 10.0.0.1 port 55478 Jan 28 02:13:50.737105 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:50.760219 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:55478.service: Deactivated successfully. Jan 28 02:13:50.770755 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 02:13:50.777000 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Jan 28 02:13:50.788178 systemd-logind[1534]: Removed session 14. Jan 28 02:13:55.764933 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:59034.service - OpenSSH per-connection server daemon (10.0.0.1:59034). Jan 28 02:13:55.934989 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 59034 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:13:55.939913 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:13:55.974794 systemd-logind[1534]: New session 15 of user core. Jan 28 02:13:55.992980 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 02:13:56.534451 sshd[4265]: Connection closed by 10.0.0.1 port 59034 Jan 28 02:13:56.535407 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Jan 28 02:13:56.549059 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:59034.service: Deactivated successfully. Jan 28 02:13:56.555801 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 02:13:56.559980 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. 
Jan 28 02:13:56.568427 systemd-logind[1534]: Removed session 15. Jan 28 02:14:01.617152 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:59050.service - OpenSSH per-connection server daemon (10.0.0.1:59050). Jan 28 02:14:01.905220 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 59050 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:01.919106 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:01.991927 systemd-logind[1534]: New session 16 of user core. Jan 28 02:14:02.002070 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 02:14:02.772122 sshd[4286]: Connection closed by 10.0.0.1 port 59050 Jan 28 02:14:02.773435 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:02.792182 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:59050.service: Deactivated successfully. Jan 28 02:14:02.798143 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 02:14:02.801753 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. Jan 28 02:14:02.816749 systemd-logind[1534]: Removed session 16. Jan 28 02:14:07.826868 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:42366.service - OpenSSH per-connection server daemon (10.0.0.1:42366). Jan 28 02:14:08.083454 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 42366 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:08.088195 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:08.122932 systemd-logind[1534]: New session 17 of user core. Jan 28 02:14:08.149025 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 02:14:08.807817 sshd[4303]: Connection closed by 10.0.0.1 port 42366 Jan 28 02:14:08.809221 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:08.834791 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:42366.service: Deactivated successfully. Jan 28 02:14:08.851932 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 02:14:08.880422 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. Jan 28 02:14:08.898074 systemd-logind[1534]: Removed session 17. Jan 28 02:14:13.833110 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:42368.service - OpenSSH per-connection server daemon (10.0.0.1:42368). Jan 28 02:14:13.947187 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 42368 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:13.951124 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:13.979129 systemd-logind[1534]: New session 18 of user core. Jan 28 02:14:13.987165 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 02:14:14.242422 sshd[4320]: Connection closed by 10.0.0.1 port 42368 Jan 28 02:14:14.241087 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:14.257244 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:42368.service: Deactivated successfully. Jan 28 02:14:14.260651 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 02:14:14.265705 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Jan 28 02:14:14.274046 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:42376.service - OpenSSH per-connection server daemon (10.0.0.1:42376). Jan 28 02:14:14.277098 systemd-logind[1534]: Removed session 18. 
Jan 28 02:14:14.385191 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 42376 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:14.388972 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:14.409007 systemd-logind[1534]: New session 19 of user core. Jan 28 02:14:14.423474 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 02:14:14.777102 sshd[4337]: Connection closed by 10.0.0.1 port 42376 Jan 28 02:14:14.778868 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:14.792189 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:42376.service: Deactivated successfully. Jan 28 02:14:14.795392 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 02:14:14.797460 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. Jan 28 02:14:14.807891 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:56632.service - OpenSSH per-connection server daemon (10.0.0.1:56632). Jan 28 02:14:14.811259 systemd-logind[1534]: Removed session 19. Jan 28 02:14:14.965653 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 56632 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:14.970992 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:14.990788 systemd-logind[1534]: New session 20 of user core. Jan 28 02:14:14.998983 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 02:14:15.243158 sshd[4352]: Connection closed by 10.0.0.1 port 56632 Jan 28 02:14:15.243609 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:15.256962 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:56632.service: Deactivated successfully. Jan 28 02:14:15.265362 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 02:14:15.269878 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. 
Jan 28 02:14:15.272896 systemd-logind[1534]: Removed session 20. Jan 28 02:14:20.276103 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:56640.service - OpenSSH per-connection server daemon (10.0.0.1:56640). Jan 28 02:14:20.368016 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 56640 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:20.371277 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:20.384065 systemd-logind[1534]: New session 21 of user core. Jan 28 02:14:20.397897 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 02:14:20.699426 sshd[4372]: Connection closed by 10.0.0.1 port 56640 Jan 28 02:14:20.699806 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:20.706809 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:56640.service: Deactivated successfully. Jan 28 02:14:20.709834 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 02:14:20.713114 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. Jan 28 02:14:20.718176 systemd-logind[1534]: Removed session 21. Jan 28 02:14:25.717390 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:56428.service - OpenSSH per-connection server daemon (10.0.0.1:56428). Jan 28 02:14:25.816474 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 56428 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:25.818403 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:25.834601 systemd-logind[1534]: New session 22 of user core. Jan 28 02:14:25.853073 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 02:14:26.063766 sshd[4388]: Connection closed by 10.0.0.1 port 56428 Jan 28 02:14:26.064109 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:26.072036 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:56428.service: Deactivated successfully. Jan 28 02:14:26.076035 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 02:14:26.078449 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. Jan 28 02:14:26.082670 systemd-logind[1534]: Removed session 22. Jan 28 02:14:26.582361 kubelet[2766]: E0128 02:14:26.582152 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:14:31.087276 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:56444.service - OpenSSH per-connection server daemon (10.0.0.1:56444). Jan 28 02:14:31.184257 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 56444 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:14:31.186470 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:14:31.198627 systemd-logind[1534]: New session 23 of user core. Jan 28 02:14:31.211902 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 02:14:31.464266 sshd[4406]: Connection closed by 10.0.0.1 port 56444 Jan 28 02:14:31.464705 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Jan 28 02:14:31.469909 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:56444.service: Deactivated successfully. Jan 28 02:14:31.472140 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 02:14:31.473980 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. Jan 28 02:14:31.476788 systemd-logind[1534]: Removed session 23. 
Jan 28 02:14:33.587396 kubelet[2766]: E0128 02:14:33.587110 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:14:36.503180 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:35450.service - OpenSSH per-connection server daemon (10.0.0.1:35450).
Jan 28 02:14:36.712365 sshd[4419]: Accepted publickey for core from 10.0.0.1 port 35450 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:36.715113 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:36.742682 systemd-logind[1534]: New session 24 of user core.
Jan 28 02:14:36.770895 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 02:14:37.138194 sshd[4422]: Connection closed by 10.0.0.1 port 35450
Jan 28 02:14:37.138770 sshd-session[4419]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:37.149810 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit.
Jan 28 02:14:37.152083 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:35450.service: Deactivated successfully.
Jan 28 02:14:37.160471 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 02:14:37.167817 systemd-logind[1534]: Removed session 24.
Jan 28 02:14:42.160933 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:35464.service - OpenSSH per-connection server daemon (10.0.0.1:35464).
Jan 28 02:14:42.247855 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 35464 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:42.252071 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:42.263767 systemd-logind[1534]: New session 25 of user core.
Jan 28 02:14:42.282210 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 02:14:42.486149 sshd[4439]: Connection closed by 10.0.0.1 port 35464
Jan 28 02:14:42.486682 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:42.494470 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:35464.service: Deactivated successfully.
Jan 28 02:14:42.498372 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 02:14:42.502401 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Jan 28 02:14:42.505949 systemd-logind[1534]: Removed session 25.
Jan 28 02:14:42.581698 kubelet[2766]: E0128 02:14:42.581365 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:14:47.508168 systemd[1]: Started sshd@25-10.0.0.150:22-10.0.0.1:50640.service - OpenSSH per-connection server daemon (10.0.0.1:50640).
Jan 28 02:14:47.595080 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 50640 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:47.598943 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:47.608747 systemd-logind[1534]: New session 26 of user core.
Jan 28 02:14:47.619953 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 02:14:47.819018 sshd[4459]: Connection closed by 10.0.0.1 port 50640
Jan 28 02:14:47.820806 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:47.832841 systemd[1]: sshd@25-10.0.0.150:22-10.0.0.1:50640.service: Deactivated successfully.
Jan 28 02:14:47.836937 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 02:14:47.840215 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Jan 28 02:14:47.847191 systemd[1]: Started sshd@26-10.0.0.150:22-10.0.0.1:50654.service - OpenSSH per-connection server daemon (10.0.0.1:50654).
Jan 28 02:14:47.850763 systemd-logind[1534]: Removed session 26.
Jan 28 02:14:47.942106 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 50654 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:47.945802 sshd-session[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:47.958096 systemd-logind[1534]: New session 27 of user core.
Jan 28 02:14:47.968142 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 28 02:14:48.515140 sshd[4476]: Connection closed by 10.0.0.1 port 50654
Jan 28 02:14:48.516734 sshd-session[4473]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:48.529106 systemd[1]: sshd@26-10.0.0.150:22-10.0.0.1:50654.service: Deactivated successfully.
Jan 28 02:14:48.533909 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 02:14:48.536819 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit.
Jan 28 02:14:48.542653 systemd[1]: Started sshd@27-10.0.0.150:22-10.0.0.1:50666.service - OpenSSH per-connection server daemon (10.0.0.1:50666).
Jan 28 02:14:48.544847 systemd-logind[1534]: Removed session 27.
Jan 28 02:14:48.657141 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 50666 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:48.659762 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:48.703994 systemd-logind[1534]: New session 28 of user core.
Jan 28 02:14:48.741239 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 02:14:50.184427 sshd[4492]: Connection closed by 10.0.0.1 port 50666
Jan 28 02:14:50.182964 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:50.204019 systemd[1]: sshd@27-10.0.0.150:22-10.0.0.1:50666.service: Deactivated successfully.
Jan 28 02:14:50.209178 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 02:14:50.209805 systemd[1]: session-28.scope: Consumed 1.065s CPU time, 37M memory peak.
Jan 28 02:14:50.215682 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit.
Jan 28 02:14:50.219675 systemd[1]: Started sshd@28-10.0.0.150:22-10.0.0.1:50672.service - OpenSSH per-connection server daemon (10.0.0.1:50672).
Jan 28 02:14:50.230916 systemd-logind[1534]: Removed session 28.
Jan 28 02:14:50.352779 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 50672 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:50.355164 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:50.375224 systemd-logind[1534]: New session 29 of user core.
Jan 28 02:14:50.388090 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 02:14:51.147813 sshd[4515]: Connection closed by 10.0.0.1 port 50672
Jan 28 02:14:51.148726 sshd-session[4512]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:51.168147 systemd[1]: sshd@28-10.0.0.150:22-10.0.0.1:50672.service: Deactivated successfully.
Jan 28 02:14:51.175922 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 02:14:51.182826 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit.
Jan 28 02:14:51.197752 systemd[1]: Started sshd@29-10.0.0.150:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688).
Jan 28 02:14:51.204260 systemd-logind[1534]: Removed session 29.
Jan 28 02:14:51.353722 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:51.357430 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:51.375825 systemd-logind[1534]: New session 30 of user core.
Jan 28 02:14:51.388414 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 28 02:14:51.702077 sshd[4531]: Connection closed by 10.0.0.1 port 50688
Jan 28 02:14:51.703191 sshd-session[4527]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:51.711402 systemd[1]: sshd@29-10.0.0.150:22-10.0.0.1:50688.service: Deactivated successfully.
Jan 28 02:14:51.716009 systemd[1]: session-30.scope: Deactivated successfully.
Jan 28 02:14:51.722214 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit.
Jan 28 02:14:51.727948 systemd-logind[1534]: Removed session 30.
Jan 28 02:14:53.591035 kubelet[2766]: E0128 02:14:53.590256 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:14:53.595186 kubelet[2766]: E0128 02:14:53.595079 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:14:56.730404 systemd[1]: Started sshd@30-10.0.0.150:22-10.0.0.1:40328.service - OpenSSH per-connection server daemon (10.0.0.1:40328).
Jan 28 02:14:56.888802 sshd[4546]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:14:56.896028 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:14:56.914402 systemd-logind[1534]: New session 31 of user core.
Jan 28 02:14:56.931214 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 28 02:14:57.415866 sshd[4549]: Connection closed by 10.0.0.1 port 40328
Jan 28 02:14:57.416678 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Jan 28 02:14:57.429100 systemd[1]: sshd@30-10.0.0.150:22-10.0.0.1:40328.service: Deactivated successfully.
Jan 28 02:14:57.437101 systemd[1]: session-31.scope: Deactivated successfully.
Jan 28 02:14:57.445077 systemd-logind[1534]: Session 31 logged out. Waiting for processes to exit.
Jan 28 02:14:57.473191 systemd-logind[1534]: Removed session 31.
Jan 28 02:15:02.459059 systemd[1]: Started sshd@31-10.0.0.150:22-10.0.0.1:40330.service - OpenSSH per-connection server daemon (10.0.0.1:40330).
Jan 28 02:15:02.623747 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 40330 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:15:02.626696 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:15:02.653438 systemd-logind[1534]: New session 32 of user core.
Jan 28 02:15:02.683052 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 28 02:15:03.163811 sshd[4566]: Connection closed by 10.0.0.1 port 40330
Jan 28 02:15:03.162773 sshd-session[4563]: pam_unix(sshd:session): session closed for user core
Jan 28 02:15:03.174845 systemd-logind[1534]: Session 32 logged out. Waiting for processes to exit.
Jan 28 02:15:03.178908 systemd[1]: sshd@31-10.0.0.150:22-10.0.0.1:40330.service: Deactivated successfully.
Jan 28 02:15:03.189467 systemd[1]: session-32.scope: Deactivated successfully.
Jan 28 02:15:03.201438 systemd-logind[1534]: Removed session 32.
Jan 28 02:15:08.172927 systemd[1]: Started sshd@32-10.0.0.150:22-10.0.0.1:60626.service - OpenSSH per-connection server daemon (10.0.0.1:60626).
Jan 28 02:15:08.263976 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 60626 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:15:08.266788 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:15:08.281834 systemd-logind[1534]: New session 33 of user core.
Jan 28 02:15:08.298925 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 28 02:15:08.563037 sshd[4582]: Connection closed by 10.0.0.1 port 60626
Jan 28 02:15:08.564870 sshd-session[4579]: pam_unix(sshd:session): session closed for user core
Jan 28 02:15:08.576226 systemd[1]: sshd@32-10.0.0.150:22-10.0.0.1:60626.service: Deactivated successfully.
Jan 28 02:15:08.582025 systemd[1]: session-33.scope: Deactivated successfully.
Jan 28 02:15:08.585073 systemd-logind[1534]: Session 33 logged out. Waiting for processes to exit.
Jan 28 02:15:08.589963 systemd-logind[1534]: Removed session 33.
Jan 28 02:15:10.582979 kubelet[2766]: E0128 02:15:10.582750 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:13.586009 systemd[1]: Started sshd@33-10.0.0.150:22-10.0.0.1:60632.service - OpenSSH per-connection server daemon (10.0.0.1:60632).
Jan 28 02:15:13.718840 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 60632 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:15:13.723225 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:15:13.740749 systemd-logind[1534]: New session 34 of user core.
Jan 28 02:15:13.747037 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 28 02:15:14.113941 sshd[4601]: Connection closed by 10.0.0.1 port 60632
Jan 28 02:15:14.115792 sshd-session[4598]: pam_unix(sshd:session): session closed for user core
Jan 28 02:15:14.126211 systemd[1]: sshd@33-10.0.0.150:22-10.0.0.1:60632.service: Deactivated successfully.
Jan 28 02:15:14.135042 systemd[1]: session-34.scope: Deactivated successfully.
Jan 28 02:15:14.138416 systemd-logind[1534]: Session 34 logged out. Waiting for processes to exit.
Jan 28 02:15:14.142948 systemd-logind[1534]: Removed session 34.
Jan 28 02:15:19.140451 systemd[1]: Started sshd@34-10.0.0.150:22-10.0.0.1:58534.service - OpenSSH per-connection server daemon (10.0.0.1:58534).
Jan 28 02:15:19.282850 sshd[4616]: Accepted publickey for core from 10.0.0.1 port 58534 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:15:19.285850 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:15:19.314775 systemd-logind[1534]: New session 35 of user core.
Jan 28 02:15:19.326051 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 28 02:15:19.677876 sshd[4619]: Connection closed by 10.0.0.1 port 58534
Jan 28 02:15:19.682903 sshd-session[4616]: pam_unix(sshd:session): session closed for user core
Jan 28 02:15:19.704202 systemd[1]: sshd@34-10.0.0.150:22-10.0.0.1:58534.service: Deactivated successfully.
Jan 28 02:15:19.708082 systemd[1]: session-35.scope: Deactivated successfully.
Jan 28 02:15:19.713458 systemd-logind[1534]: Session 35 logged out. Waiting for processes to exit.
Jan 28 02:15:19.724404 systemd[1]: Started sshd@35-10.0.0.150:22-10.0.0.1:58536.service - OpenSSH per-connection server daemon (10.0.0.1:58536).
Jan 28 02:15:19.728196 systemd-logind[1534]: Removed session 35.
Jan 28 02:15:19.872432 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 58536 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg
Jan 28 02:15:19.877409 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:15:19.900874 systemd-logind[1534]: New session 36 of user core.
Jan 28 02:15:19.913055 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 28 02:15:22.108944 containerd[1551]: time="2026-01-28T02:15:22.107819283Z" level=info msg="StopContainer for \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" with timeout 30 (s)"
Jan 28 02:15:22.183971 containerd[1551]: time="2026-01-28T02:15:22.183854721Z" level=info msg="Stop container \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" with signal terminated"
Jan 28 02:15:22.267964 systemd[1]: cri-containerd-f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978.scope: Deactivated successfully.
Jan 28 02:15:22.271914 systemd[1]: cri-containerd-f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978.scope: Consumed 2.551s CPU time, 27.8M memory peak, 4K written to disk.
Jan 28 02:15:22.305939 containerd[1551]: time="2026-01-28T02:15:22.305423839Z" level=info msg="received container exit event container_id:\"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" id:\"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" pid:3422 exited_at:{seconds:1769566522 nanos:300185138}"
Jan 28 02:15:22.333954 containerd[1551]: time="2026-01-28T02:15:22.333908678Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 02:15:22.355909 containerd[1551]: time="2026-01-28T02:15:22.353970366Z" level=info msg="StopContainer for \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" with timeout 2 (s)"
Jan 28 02:15:22.359707 containerd[1551]: time="2026-01-28T02:15:22.359211068Z" level=info msg="Stop container \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" with signal terminated"
Jan 28 02:15:22.416771 systemd-networkd[1469]: lxc_health: Link DOWN
Jan 28 02:15:22.416785 systemd-networkd[1469]: lxc_health: Lost carrier
Jan 28 02:15:22.428247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978-rootfs.mount: Deactivated successfully.
Jan 28 02:15:22.483845 systemd[1]: cri-containerd-6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc.scope: Deactivated successfully.
Jan 28 02:15:22.485084 systemd[1]: cri-containerd-6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc.scope: Consumed 22.437s CPU time, 127.3M memory peak, 208K read from disk, 13.3M written to disk.
Jan 28 02:15:22.500734 containerd[1551]: time="2026-01-28T02:15:22.498943287Z" level=info msg="received container exit event container_id:\"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" id:\"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" pid:3400 exited_at:{seconds:1769566522 nanos:495839251}"
Jan 28 02:15:22.515485 containerd[1551]: time="2026-01-28T02:15:22.515113791Z" level=info msg="StopContainer for \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" returns successfully"
Jan 28 02:15:22.536888 containerd[1551]: time="2026-01-28T02:15:22.536843521Z" level=info msg="StopPodSandbox for \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\""
Jan 28 02:15:22.541458 containerd[1551]: time="2026-01-28T02:15:22.540757002Z" level=info msg="Container to stop \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 02:15:22.613139 systemd[1]: cri-containerd-5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e.scope: Deactivated successfully.
Jan 28 02:15:22.635793 containerd[1551]: time="2026-01-28T02:15:22.635749458Z" level=info msg="received sandbox exit event container_id:\"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" id:\"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" exit_status:137 exited_at:{seconds:1769566522 nanos:634674229}" monitor_name=podsandbox
Jan 28 02:15:22.677159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc-rootfs.mount: Deactivated successfully.
Jan 28 02:15:22.736463 containerd[1551]: time="2026-01-28T02:15:22.735917997Z" level=info msg="StopContainer for \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" returns successfully"
Jan 28 02:15:22.740136 containerd[1551]: time="2026-01-28T02:15:22.739951909Z" level=info msg="StopPodSandbox for \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\""
Jan 28 02:15:22.740136 containerd[1551]: time="2026-01-28T02:15:22.740126794Z" level=info msg="Container to stop \"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 02:15:22.740397 containerd[1551]: time="2026-01-28T02:15:22.740144808Z" level=info msg="Container to stop \"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 02:15:22.740397 containerd[1551]: time="2026-01-28T02:15:22.740160487Z" level=info msg="Container to stop \"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 02:15:22.740397 containerd[1551]: time="2026-01-28T02:15:22.740175765Z" level=info msg="Container to stop \"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 02:15:22.740397 containerd[1551]: time="2026-01-28T02:15:22.740185974Z" level=info msg="Container to stop \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 02:15:22.767134 systemd[1]: cri-containerd-c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3.scope: Deactivated successfully.
Jan 28 02:15:22.777366 containerd[1551]: time="2026-01-28T02:15:22.776751818Z" level=info msg="received sandbox exit event container_id:\"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" id:\"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" exit_status:137 exited_at:{seconds:1769566522 nanos:774880495}" monitor_name=podsandbox
Jan 28 02:15:22.804068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e-rootfs.mount: Deactivated successfully.
Jan 28 02:15:22.819037 containerd[1551]: time="2026-01-28T02:15:22.818389775Z" level=info msg="shim disconnected" id=5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e namespace=k8s.io
Jan 28 02:15:22.819037 containerd[1551]: time="2026-01-28T02:15:22.818418569Z" level=warning msg="cleaning up after shim disconnected" id=5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e namespace=k8s.io
Jan 28 02:15:22.834390 containerd[1551]: time="2026-01-28T02:15:22.818427566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 02:15:22.877844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3-rootfs.mount: Deactivated successfully.
Jan 28 02:15:22.900408 containerd[1551]: time="2026-01-28T02:15:22.899923981Z" level=info msg="shim disconnected" id=c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3 namespace=k8s.io
Jan 28 02:15:22.900408 containerd[1551]: time="2026-01-28T02:15:22.899956892Z" level=warning msg="cleaning up after shim disconnected" id=c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3 namespace=k8s.io
Jan 28 02:15:22.900408 containerd[1551]: time="2026-01-28T02:15:22.899965197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 02:15:22.912773 containerd[1551]: time="2026-01-28T02:15:22.911384406Z" level=info msg="TearDown network for sandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" successfully"
Jan 28 02:15:22.912773 containerd[1551]: time="2026-01-28T02:15:22.911419963Z" level=info msg="StopPodSandbox for \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" returns successfully"
Jan 28 02:15:22.918066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e-shm.mount: Deactivated successfully.
Jan 28 02:15:22.963880 containerd[1551]: time="2026-01-28T02:15:22.960478939Z" level=info msg="received sandbox container exit event sandbox_id:\"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" exit_status:137 exited_at:{seconds:1769566522 nanos:634674229}" monitor_name=criService
Jan 28 02:15:23.036387 containerd[1551]: time="2026-01-28T02:15:23.036082156Z" level=info msg="received sandbox container exit event sandbox_id:\"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" exit_status:137 exited_at:{seconds:1769566522 nanos:774880495}" monitor_name=criService
Jan 28 02:15:23.037988 containerd[1551]: time="2026-01-28T02:15:23.037718898Z" level=info msg="TearDown network for sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" successfully"
Jan 28 02:15:23.037988 containerd[1551]: time="2026-01-28T02:15:23.037869998Z" level=info msg="StopPodSandbox for \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" returns successfully"
Jan 28 02:15:23.054391 kubelet[2766]: I0128 02:15:23.053042 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6dxh\" (UniqueName: \"kubernetes.io/projected/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-kube-api-access-n6dxh\") pod \"072dec10-02b3-4f7a-b4fa-aabda3ec5bf6\" (UID: \"072dec10-02b3-4f7a-b4fa-aabda3ec5bf6\") "
Jan 28 02:15:23.054391 kubelet[2766]: I0128 02:15:23.053092 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-cilium-config-path\") pod \"072dec10-02b3-4f7a-b4fa-aabda3ec5bf6\" (UID: \"072dec10-02b3-4f7a-b4fa-aabda3ec5bf6\") "
Jan 28 02:15:23.067448 kubelet[2766]: I0128 02:15:23.067198 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-kube-api-access-n6dxh" (OuterVolumeSpecName: "kube-api-access-n6dxh") pod "072dec10-02b3-4f7a-b4fa-aabda3ec5bf6" (UID: "072dec10-02b3-4f7a-b4fa-aabda3ec5bf6"). InnerVolumeSpecName "kube-api-access-n6dxh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 28 02:15:23.070651 kubelet[2766]: I0128 02:15:23.069138 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "072dec10-02b3-4f7a-b4fa-aabda3ec5bf6" (UID: "072dec10-02b3-4f7a-b4fa-aabda3ec5bf6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 28 02:15:23.156137 kubelet[2766]: I0128 02:15:23.155066 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-cgroup\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.156137 kubelet[2766]: I0128 02:15:23.155247 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.156137 kubelet[2766]: I0128 02:15:23.155392 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hubble-tls\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.156137 kubelet[2766]: I0128 02:15:23.155420 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-run\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.156137 kubelet[2766]: I0128 02:15:23.155439 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hostproc\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.156137 kubelet[2766]: I0128 02:15:23.155460 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-bpf-maps\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.157909 kubelet[2766]: I0128 02:15:23.155482 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-xtables-lock\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.157909 kubelet[2766]: I0128 02:15:23.156370 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-etc-cni-netd\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.157909 kubelet[2766]: I0128 02:15:23.156410 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-net\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.157909 kubelet[2766]: I0128 02:15:23.156425 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cni-path\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.157909 kubelet[2766]: I0128 02:15:23.156444 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-config-path\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.157909 kubelet[2766]: I0128 02:15:23.156459 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqvrc\" (UniqueName: \"kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-kube-api-access-dqvrc\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.158113 kubelet[2766]: I0128 02:15:23.156476 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-lib-modules\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.158113 kubelet[2766]: I0128 02:15:23.156691 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-clustermesh-secrets\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.158113 kubelet[2766]: I0128 02:15:23.156709 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-kernel\") pod \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\" (UID: \"59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1\") "
Jan 28 02:15:23.158113 kubelet[2766]: I0128 02:15:23.156741 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 28 02:15:23.158113 kubelet[2766]: I0128 02:15:23.156751 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n6dxh\" (UniqueName: \"kubernetes.io/projected/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-kube-api-access-n6dxh\") on node \"localhost\" DevicePath \"\""
Jan 28 02:15:23.158113 kubelet[2766]: I0128 02:15:23.156760 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 28 02:15:23.158424 kubelet[2766]: I0128 02:15:23.156107 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158424 kubelet[2766]: I0128 02:15:23.156150 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158424 kubelet[2766]: I0128 02:15:23.156169 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hostproc" (OuterVolumeSpecName: "hostproc") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158424 kubelet[2766]: I0128 02:15:23.156184 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158424 kubelet[2766]: I0128 02:15:23.156781 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158807 kubelet[2766]: I0128 02:15:23.156818 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158807 kubelet[2766]: I0128 02:15:23.156831 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.158807 kubelet[2766]: I0128 02:15:23.156844 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cni-path" (OuterVolumeSpecName: "cni-path") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 02:15:23.159138 kubelet[2766]: I0128 02:15:23.158931 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 02:15:23.162742 kubelet[2766]: I0128 02:15:23.161483 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 02:15:23.173047 kubelet[2766]: I0128 02:15:23.172466 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 02:15:23.176474 kubelet[2766]: I0128 02:15:23.176198 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 02:15:23.178410 kubelet[2766]: I0128 02:15:23.178052 2766 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-kube-api-access-dqvrc" (OuterVolumeSpecName: "kube-api-access-dqvrc") pod "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" (UID: "59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1"). InnerVolumeSpecName "kube-api-access-dqvrc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 02:15:23.257667 kubelet[2766]: I0128 02:15:23.257402 2766 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258137 kubelet[2766]: I0128 02:15:23.257986 2766 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258137 kubelet[2766]: I0128 02:15:23.258119 2766 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258137 kubelet[2766]: I0128 02:15:23.258134 2766 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258145 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258159 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dqvrc\" (UniqueName: \"kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-kube-api-access-dqvrc\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258169 2766 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 
02:15:23.258179 2766 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258193 2766 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258204 2766 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258214 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.258887 kubelet[2766]: I0128 02:15:23.258230 2766 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.259157 kubelet[2766]: I0128 02:15:23.258240 2766 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 28 02:15:23.425067 systemd[1]: var-lib-kubelet-pods-072dec10\x2d02b3\x2d4f7a\x2db4fa\x2daabda3ec5bf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn6dxh.mount: Deactivated successfully. Jan 28 02:15:23.425231 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3-shm.mount: Deactivated successfully. 
Jan 28 02:15:23.425463 systemd[1]: var-lib-kubelet-pods-59f2b9f5\x2db2f7\x2d45c3\x2d8a8d\x2deda832ce45e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddqvrc.mount: Deactivated successfully. Jan 28 02:15:23.427228 systemd[1]: var-lib-kubelet-pods-59f2b9f5\x2db2f7\x2d45c3\x2d8a8d\x2deda832ce45e1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 28 02:15:23.427438 systemd[1]: var-lib-kubelet-pods-59f2b9f5\x2db2f7\x2d45c3\x2d8a8d\x2deda832ce45e1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 28 02:15:23.604689 systemd[1]: Removed slice kubepods-besteffort-pod072dec10_02b3_4f7a_b4fa_aabda3ec5bf6.slice - libcontainer container kubepods-besteffort-pod072dec10_02b3_4f7a_b4fa_aabda3ec5bf6.slice. Jan 28 02:15:23.604840 systemd[1]: kubepods-besteffort-pod072dec10_02b3_4f7a_b4fa_aabda3ec5bf6.slice: Consumed 2.626s CPU time, 28M memory peak, 4K written to disk. Jan 28 02:15:23.611450 systemd[1]: Removed slice kubepods-burstable-pod59f2b9f5_b2f7_45c3_8a8d_eda832ce45e1.slice - libcontainer container kubepods-burstable-pod59f2b9f5_b2f7_45c3_8a8d_eda832ce45e1.slice. Jan 28 02:15:23.612112 systemd[1]: kubepods-burstable-pod59f2b9f5_b2f7_45c3_8a8d_eda832ce45e1.slice: Consumed 23.038s CPU time, 127.7M memory peak, 268K read from disk, 13.3M written to disk. 
Jan 28 02:15:23.792155 kubelet[2766]: I0128 02:15:23.792095 2766 scope.go:117] "RemoveContainer" containerID="f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978" Jan 28 02:15:23.806849 containerd[1551]: time="2026-01-28T02:15:23.804910318Z" level=info msg="RemoveContainer for \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\"" Jan 28 02:15:23.831442 containerd[1551]: time="2026-01-28T02:15:23.830816780Z" level=info msg="RemoveContainer for \"f2627b385d297684d4dd6400353007fa0956e3ecdf7db1b4a6afea7470b2c978\" returns successfully" Jan 28 02:15:23.836217 kubelet[2766]: I0128 02:15:23.833845 2766 scope.go:117] "RemoveContainer" containerID="6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc" Jan 28 02:15:23.848772 containerd[1551]: time="2026-01-28T02:15:23.847388139Z" level=info msg="RemoveContainer for \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\"" Jan 28 02:15:23.875042 containerd[1551]: time="2026-01-28T02:15:23.874149171Z" level=info msg="RemoveContainer for \"6dbdba287b2c28a2765b8df878a3cb2e30da726d94551db6ae8c05003ed396fc\" returns successfully" Jan 28 02:15:23.876936 kubelet[2766]: I0128 02:15:23.876489 2766 scope.go:117] "RemoveContainer" containerID="ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe" Jan 28 02:15:23.890482 containerd[1551]: time="2026-01-28T02:15:23.890432625Z" level=info msg="RemoveContainer for \"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\"" Jan 28 02:15:23.914134 containerd[1551]: time="2026-01-28T02:15:23.913980438Z" level=info msg="RemoveContainer for \"ac7873963f7680968a34cf409c8deb896ba72741172d5066c4eb5121261694fe\" returns successfully" Jan 28 02:15:23.916465 kubelet[2766]: I0128 02:15:23.916157 2766 scope.go:117] "RemoveContainer" containerID="591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481" Jan 28 02:15:23.920240 sshd[4637]: Connection closed by 10.0.0.1 port 58536 Jan 28 02:15:23.923049 sshd-session[4634]: 
pam_unix(sshd:session): session closed for user core Jan 28 02:15:23.935439 containerd[1551]: time="2026-01-28T02:15:23.934079006Z" level=info msg="RemoveContainer for \"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\"" Jan 28 02:15:23.938028 systemd[1]: sshd@35-10.0.0.150:22-10.0.0.1:58536.service: Deactivated successfully. Jan 28 02:15:23.945801 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 02:15:23.948374 containerd[1551]: time="2026-01-28T02:15:23.947906388Z" level=info msg="RemoveContainer for \"591f4cbeeacdc487c34eff080e21dc388bb26b6c7bacefafffa7a3caaaed9481\" returns successfully" Jan 28 02:15:23.946428 systemd[1]: session-36.scope: Consumed 1.329s CPU time, 26M memory peak. Jan 28 02:15:23.948772 kubelet[2766]: I0128 02:15:23.948371 2766 scope.go:117] "RemoveContainer" containerID="841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7" Jan 28 02:15:23.952187 systemd-logind[1534]: Session 36 logged out. Waiting for processes to exit. Jan 28 02:15:23.954169 containerd[1551]: time="2026-01-28T02:15:23.952871151Z" level=info msg="RemoveContainer for \"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\"" Jan 28 02:15:23.959248 systemd[1]: Started sshd@36-10.0.0.150:22-10.0.0.1:58552.service - OpenSSH per-connection server daemon (10.0.0.1:58552). Jan 28 02:15:23.970693 containerd[1551]: time="2026-01-28T02:15:23.969013312Z" level=info msg="RemoveContainer for \"841b869fa1c895ac66c97235b8f58371a4b660d6701df82f5b7483a4719f16a7\" returns successfully" Jan 28 02:15:23.971612 kubelet[2766]: I0128 02:15:23.970856 2766 scope.go:117] "RemoveContainer" containerID="4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b" Jan 28 02:15:23.980921 systemd-logind[1534]: Removed session 36. 
Jan 28 02:15:23.985614 containerd[1551]: time="2026-01-28T02:15:23.984965859Z" level=info msg="RemoveContainer for \"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\"" Jan 28 02:15:24.023131 containerd[1551]: time="2026-01-28T02:15:24.020485521Z" level=info msg="RemoveContainer for \"4aaf662497ecb334b85d0e464349b55682d83fdbfc19bfa74bb812727130958b\" returns successfully" Jan 28 02:15:24.097397 sshd[4786]: Accepted publickey for core from 10.0.0.1 port 58552 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:15:24.104224 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:15:24.134027 systemd-logind[1534]: New session 37 of user core. Jan 28 02:15:24.151166 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 02:15:24.546473 kubelet[2766]: E0128 02:15:24.545906 2766 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 02:15:25.252473 sshd[4789]: Connection closed by 10.0.0.1 port 58552 Jan 28 02:15:25.253819 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Jan 28 02:15:25.267363 systemd[1]: sshd@36-10.0.0.150:22-10.0.0.1:58552.service: Deactivated successfully. Jan 28 02:15:25.275228 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 02:15:25.280932 systemd-logind[1534]: Session 37 logged out. Waiting for processes to exit. Jan 28 02:15:25.287843 systemd[1]: Started sshd@37-10.0.0.150:22-10.0.0.1:45716.service - OpenSSH per-connection server daemon (10.0.0.1:45716). Jan 28 02:15:25.298249 systemd-logind[1534]: Removed session 37. 
Jan 28 02:15:25.335911 kubelet[2766]: I0128 02:15:25.332477 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ecb0d34d-0e84-4d59-a053-9fe31327952c-cilium-ipsec-secrets\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.339849 kubelet[2766]: I0128 02:15:25.336393 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-cilium-run\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.339849 kubelet[2766]: I0128 02:15:25.336776 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-cilium-cgroup\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.339849 kubelet[2766]: I0128 02:15:25.336798 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-etc-cni-netd\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.339849 kubelet[2766]: I0128 02:15:25.336817 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-lib-modules\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.339849 kubelet[2766]: I0128 02:15:25.336836 2766 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecb0d34d-0e84-4d59-a053-9fe31327952c-clustermesh-secrets\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.339849 kubelet[2766]: I0128 02:15:25.336856 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-cni-path\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340085 kubelet[2766]: I0128 02:15:25.336876 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-xtables-lock\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340085 kubelet[2766]: I0128 02:15:25.336895 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecb0d34d-0e84-4d59-a053-9fe31327952c-cilium-config-path\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340085 kubelet[2766]: I0128 02:15:25.336915 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-host-proc-sys-net\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340085 kubelet[2766]: I0128 02:15:25.336936 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-host-proc-sys-kernel\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340085 kubelet[2766]: I0128 02:15:25.336959 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-bpf-maps\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340085 kubelet[2766]: I0128 02:15:25.336978 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecb0d34d-0e84-4d59-a053-9fe31327952c-hostproc\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340390 kubelet[2766]: I0128 02:15:25.337000 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecb0d34d-0e84-4d59-a053-9fe31327952c-hubble-tls\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.340390 kubelet[2766]: I0128 02:15:25.337019 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgmn\" (UniqueName: \"kubernetes.io/projected/ecb0d34d-0e84-4d59-a053-9fe31327952c-kube-api-access-gwgmn\") pod \"cilium-cd8t5\" (UID: \"ecb0d34d-0e84-4d59-a053-9fe31327952c\") " pod="kube-system/cilium-cd8t5" Jan 28 02:15:25.376401 systemd[1]: Created slice kubepods-burstable-podecb0d34d_0e84_4d59_a053_9fe31327952c.slice - libcontainer container kubepods-burstable-podecb0d34d_0e84_4d59_a053_9fe31327952c.slice. 
Jan 28 02:15:25.487711 sshd[4801]: Accepted publickey for core from 10.0.0.1 port 45716 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:15:25.490232 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:15:25.529958 systemd-logind[1534]: New session 38 of user core. Jan 28 02:15:25.544942 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 28 02:15:25.587605 kubelet[2766]: I0128 02:15:25.587055 2766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="072dec10-02b3-4f7a-b4fa-aabda3ec5bf6" path="/var/lib/kubelet/pods/072dec10-02b3-4f7a-b4fa-aabda3ec5bf6/volumes" Jan 28 02:15:25.588426 kubelet[2766]: I0128 02:15:25.588010 2766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1" path="/var/lib/kubelet/pods/59f2b9f5-b2f7-45c3-8a8d-eda832ce45e1/volumes" Jan 28 02:15:25.625420 sshd[4808]: Connection closed by 10.0.0.1 port 45716 Jan 28 02:15:25.626688 sshd-session[4801]: pam_unix(sshd:session): session closed for user core Jan 28 02:15:25.640722 systemd[1]: sshd@37-10.0.0.150:22-10.0.0.1:45716.service: Deactivated successfully. Jan 28 02:15:25.645012 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 02:15:25.650002 systemd-logind[1534]: Session 38 logged out. Waiting for processes to exit. Jan 28 02:15:25.653893 systemd[1]: Started sshd@38-10.0.0.150:22-10.0.0.1:45732.service - OpenSSH per-connection server daemon (10.0.0.1:45732). Jan 28 02:15:25.666394 systemd-logind[1534]: Removed session 38. 
Jan 28 02:15:25.706933 kubelet[2766]: E0128 02:15:25.702751 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:15:25.711163 containerd[1551]: time="2026-01-28T02:15:25.710486929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd8t5,Uid:ecb0d34d-0e84-4d59-a053-9fe31327952c,Namespace:kube-system,Attempt:0,}" Jan 28 02:15:25.774940 containerd[1551]: time="2026-01-28T02:15:25.774395903Z" level=info msg="connecting to shim 49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9" address="unix:///run/containerd/s/c228de4251c57a0ad1da86c549734cb96b2cf52e27d7e81a39ea0fcaa586d73f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:15:25.775068 sshd[4815]: Accepted publickey for core from 10.0.0.1 port 45732 ssh2: RSA SHA256:Ca8qQ/IlE0Cvn5rQcZbmJNUuJb/6jOSXM/oXKT/rNGg Jan 28 02:15:25.777882 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:15:25.790927 systemd-logind[1534]: New session 39 of user core. Jan 28 02:15:25.802881 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 28 02:15:25.864017 systemd[1]: Started cri-containerd-49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9.scope - libcontainer container 49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9. 
Jan 28 02:15:25.980472 containerd[1551]: time="2026-01-28T02:15:25.979836462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd8t5,Uid:ecb0d34d-0e84-4d59-a053-9fe31327952c,Namespace:kube-system,Attempt:0,} returns sandbox id \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\"" Jan 28 02:15:25.990604 kubelet[2766]: E0128 02:15:25.990183 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:15:26.014039 containerd[1551]: time="2026-01-28T02:15:26.013126681Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 02:15:26.051813 containerd[1551]: time="2026-01-28T02:15:26.050933912Z" level=info msg="Container a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:15:26.072917 containerd[1551]: time="2026-01-28T02:15:26.072845358Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754\"" Jan 28 02:15:26.080900 containerd[1551]: time="2026-01-28T02:15:26.079005485Z" level=info msg="StartContainer for \"a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754\"" Jan 28 02:15:26.087230 containerd[1551]: time="2026-01-28T02:15:26.086480670Z" level=info msg="connecting to shim a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754" address="unix:///run/containerd/s/c228de4251c57a0ad1da86c549734cb96b2cf52e27d7e81a39ea0fcaa586d73f" protocol=ttrpc version=3 Jan 28 02:15:26.203424 systemd[1]: Started cri-containerd-a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754.scope - libcontainer 
container a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754. Jan 28 02:15:26.394059 containerd[1551]: time="2026-01-28T02:15:26.393147207Z" level=info msg="StartContainer for \"a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754\" returns successfully" Jan 28 02:15:26.444911 systemd[1]: cri-containerd-a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754.scope: Deactivated successfully. Jan 28 02:15:26.463169 containerd[1551]: time="2026-01-28T02:15:26.462816916Z" level=info msg="received container exit event container_id:\"a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754\" id:\"a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754\" pid:4883 exited_at:{seconds:1769566526 nanos:458452753}" Jan 28 02:15:26.595793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46ac205faa17a12bf012293ec4ca821705edf9372c5385eced97bd7d3bae754-rootfs.mount: Deactivated successfully. Jan 28 02:15:26.853064 containerd[1551]: time="2026-01-28T02:15:26.850886965Z" level=warning msg="container event discarded" container=b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26 type=CONTAINER_CREATED_EVENT Jan 28 02:15:26.853064 containerd[1551]: time="2026-01-28T02:15:26.851926977Z" level=warning msg="container event discarded" container=b5e4f407bf6c89f26250182ceb6781553bd760c7c3def8ba0ff7fdc4661e9f26 type=CONTAINER_STARTED_EVENT Jan 28 02:15:26.872810 kubelet[2766]: E0128 02:15:26.867812 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:15:26.874861 containerd[1551]: time="2026-01-28T02:15:26.874454194Z" level=warning msg="container event discarded" container=e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30 type=CONTAINER_CREATED_EVENT Jan 28 02:15:26.879883 containerd[1551]: time="2026-01-28T02:15:26.879187465Z" level=warning msg="container event 
discarded" container=e3873e9d4a38694d1449d3ed89c0003e32aa97e50f9e00b2855af002e0302c30 type=CONTAINER_STARTED_EVENT Jan 28 02:15:26.891706 containerd[1551]: time="2026-01-28T02:15:26.890820861Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 02:15:26.930171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1186249275.mount: Deactivated successfully. Jan 28 02:15:26.939449 containerd[1551]: time="2026-01-28T02:15:26.939089854Z" level=info msg="Container 2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:15:26.961425 containerd[1551]: time="2026-01-28T02:15:26.961088659Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf\"" Jan 28 02:15:26.965145 containerd[1551]: time="2026-01-28T02:15:26.965101201Z" level=info msg="StartContainer for \"2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf\"" Jan 28 02:15:26.970930 containerd[1551]: time="2026-01-28T02:15:26.970464583Z" level=info msg="connecting to shim 2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf" address="unix:///run/containerd/s/c228de4251c57a0ad1da86c549734cb96b2cf52e27d7e81a39ea0fcaa586d73f" protocol=ttrpc version=3 Jan 28 02:15:26.996904 containerd[1551]: time="2026-01-28T02:15:26.996839334Z" level=warning msg="container event discarded" container=1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c type=CONTAINER_CREATED_EVENT Jan 28 02:15:27.041975 systemd[1]: Started cri-containerd-2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf.scope - libcontainer container 
2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf.
Jan 28 02:15:27.133066 containerd[1551]: time="2026-01-28T02:15:27.130177754Z" level=warning msg="container event discarded" container=ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b type=CONTAINER_CREATED_EVENT
Jan 28 02:15:27.133066 containerd[1551]: time="2026-01-28T02:15:27.131815075Z" level=warning msg="container event discarded" container=ddb2bab318bb342a5fe724155d8424fb698b815bddc040dbb9f9a0164569844b type=CONTAINER_STARTED_EVENT
Jan 28 02:15:27.145220 containerd[1551]: time="2026-01-28T02:15:27.145028245Z" level=warning msg="container event discarded" container=ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402 type=CONTAINER_CREATED_EVENT
Jan 28 02:15:27.174910 containerd[1551]: time="2026-01-28T02:15:27.174750401Z" level=info msg="StartContainer for \"2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf\" returns successfully"
Jan 28 02:15:27.209025 systemd[1]: cri-containerd-2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf.scope: Deactivated successfully.
Jan 28 02:15:27.215962 containerd[1551]: time="2026-01-28T02:15:27.213894957Z" level=info msg="received container exit event container_id:\"2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf\" id:\"2766161cf8970667131af417eba82fd4d4d26a9e048307eeb30c402c49a013bf\" pid:4931 exited_at:{seconds:1769566527 nanos:213010891}"
Jan 28 02:15:27.432981 containerd[1551]: time="2026-01-28T02:15:27.431900748Z" level=warning msg="container event discarded" container=cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295 type=CONTAINER_CREATED_EVENT
Jan 28 02:15:27.878446 containerd[1551]: time="2026-01-28T02:15:27.878400802Z" level=warning msg="container event discarded" container=1ef55bbb030c21d6172f23430e0c1de999981ad64978b5ab49349608ceb22b0c type=CONTAINER_STARTED_EVENT
Jan 28 02:15:27.878446 containerd[1551]: time="2026-01-28T02:15:27.878431289Z" level=warning msg="container event discarded" container=cad0c1ab9a91896fbe7748ea3ed4b1d08b66813b785d0ba63d1c7e1e08bfd295 type=CONTAINER_STARTED_EVENT
Jan 28 02:15:27.881890 kubelet[2766]: E0128 02:15:27.880042 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:27.904154 containerd[1551]: time="2026-01-28T02:15:27.903430920Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 28 02:15:27.935474 containerd[1551]: time="2026-01-28T02:15:27.934883220Z" level=warning msg="container event discarded" container=ccfe4c2dc706f9014d3338a38d7a485d60f609cc78175c0601b7b171b52cb402 type=CONTAINER_STARTED_EVENT
Jan 28 02:15:27.977063 containerd[1551]: time="2026-01-28T02:15:27.976898167Z" level=info msg="Container 20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c: CDI devices from CRI Config.CDIDevices: []"
Jan 28 02:15:28.002245 containerd[1551]: time="2026-01-28T02:15:28.001813718Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c\""
Jan 28 02:15:28.006911 containerd[1551]: time="2026-01-28T02:15:28.005802399Z" level=info msg="StartContainer for \"20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c\""
Jan 28 02:15:28.010873 containerd[1551]: time="2026-01-28T02:15:28.010171665Z" level=info msg="connecting to shim 20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c" address="unix:///run/containerd/s/c228de4251c57a0ad1da86c549734cb96b2cf52e27d7e81a39ea0fcaa586d73f" protocol=ttrpc version=3
Jan 28 02:15:28.134831 systemd[1]: Started cri-containerd-20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c.scope - libcontainer container 20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c.
Jan 28 02:15:28.393186 containerd[1551]: time="2026-01-28T02:15:28.393006903Z" level=info msg="StartContainer for \"20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c\" returns successfully"
Jan 28 02:15:28.407904 systemd[1]: cri-containerd-20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c.scope: Deactivated successfully.
Jan 28 02:15:28.423828 containerd[1551]: time="2026-01-28T02:15:28.423771233Z" level=info msg="received container exit event container_id:\"20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c\" id:\"20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c\" pid:4976 exited_at:{seconds:1769566528 nanos:421380360}"
Jan 28 02:15:28.554117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20508ee9e072f214857ca44385015a1bc0f91c43a9fa032c5d2c0cd3fac56e6c-rootfs.mount: Deactivated successfully.
Jan 28 02:15:28.912100 kubelet[2766]: E0128 02:15:28.908755 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:28.937943 containerd[1551]: time="2026-01-28T02:15:28.937791374Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 28 02:15:28.996038 containerd[1551]: time="2026-01-28T02:15:28.995857350Z" level=info msg="Container e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535: CDI devices from CRI Config.CDIDevices: []"
Jan 28 02:15:29.021047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168888027.mount: Deactivated successfully.
Jan 28 02:15:29.043847 containerd[1551]: time="2026-01-28T02:15:29.043137772Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535\""
Jan 28 02:15:29.047764 containerd[1551]: time="2026-01-28T02:15:29.047108437Z" level=info msg="StartContainer for \"e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535\""
Jan 28 02:15:29.051152 containerd[1551]: time="2026-01-28T02:15:29.051035831Z" level=info msg="connecting to shim e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535" address="unix:///run/containerd/s/c228de4251c57a0ad1da86c549734cb96b2cf52e27d7e81a39ea0fcaa586d73f" protocol=ttrpc version=3
Jan 28 02:15:29.137050 systemd[1]: Started cri-containerd-e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535.scope - libcontainer container e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535.
Jan 28 02:15:29.388200 systemd[1]: cri-containerd-e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535.scope: Deactivated successfully.
Jan 28 02:15:29.396869 containerd[1551]: time="2026-01-28T02:15:29.396215997Z" level=info msg="received container exit event container_id:\"e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535\" id:\"e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535\" pid:5017 exited_at:{seconds:1769566529 nanos:394759815}"
Jan 28 02:15:29.402106 containerd[1551]: time="2026-01-28T02:15:29.401105179Z" level=info msg="StartContainer for \"e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535\" returns successfully"
Jan 28 02:15:29.549396 kubelet[2766]: E0128 02:15:29.549203 2766 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 02:15:29.557806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92be9ded0b8c5caad3bc5fbce27f50ff1aaec5c90fa899c6a7d3845934d7535-rootfs.mount: Deactivated successfully.
Jan 28 02:15:29.933749 kubelet[2766]: E0128 02:15:29.932901 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:29.952723 containerd[1551]: time="2026-01-28T02:15:29.952385568Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 28 02:15:30.013720 containerd[1551]: time="2026-01-28T02:15:30.012864818Z" level=info msg="Container aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85: CDI devices from CRI Config.CDIDevices: []"
Jan 28 02:15:30.017012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034617680.mount: Deactivated successfully.
Jan 28 02:15:30.042715 containerd[1551]: time="2026-01-28T02:15:30.042180081Z" level=info msg="CreateContainer within sandbox \"49649abff70b7cce9e8530869974f75361ae108f3c27d99751fdc2c38c8ca0d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85\""
Jan 28 02:15:30.045784 containerd[1551]: time="2026-01-28T02:15:30.044988346Z" level=info msg="StartContainer for \"aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85\""
Jan 28 02:15:30.052380 containerd[1551]: time="2026-01-28T02:15:30.051876531Z" level=info msg="connecting to shim aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85" address="unix:///run/containerd/s/c228de4251c57a0ad1da86c549734cb96b2cf52e27d7e81a39ea0fcaa586d73f" protocol=ttrpc version=3
Jan 28 02:15:30.133465 systemd[1]: Started cri-containerd-aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85.scope - libcontainer container aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85.
Jan 28 02:15:30.349107 containerd[1551]: time="2026-01-28T02:15:30.348402693Z" level=info msg="StartContainer for \"aaad3132c3f9266372a925d00b89608d2c0495114a43dd62781ddf1185059f85\" returns successfully"
Jan 28 02:15:30.592004 kubelet[2766]: I0128 02:15:30.591855 2766 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T02:15:30Z","lastTransitionTime":"2026-01-28T02:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 28 02:15:30.968654 kubelet[2766]: E0128 02:15:30.968005 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:31.029386 kubelet[2766]: I0128 02:15:31.027898 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cd8t5" podStartSLOduration=6.027883759 podStartE2EDuration="6.027883759s" podCreationTimestamp="2026-01-28 02:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:15:31.027409177 +0000 UTC m=+287.991746055" watchObservedRunningTime="2026-01-28 02:15:31.027883759 +0000 UTC m=+287.992220635"
Jan 28 02:15:31.807844 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 28 02:15:31.971659 kubelet[2766]: E0128 02:15:31.971191 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:37.586728 kubelet[2766]: E0128 02:15:37.585080 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:39.264857 systemd-networkd[1469]: lxc_health: Link UP
Jan 28 02:15:39.278763 systemd-networkd[1469]: lxc_health: Gained carrier
Jan 28 02:15:39.705766 kubelet[2766]: E0128 02:15:39.703478 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:40.049237 kubelet[2766]: E0128 02:15:40.049208 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:40.600439 kubelet[2766]: E0128 02:15:40.595807 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:40.600439 kubelet[2766]: E0128 02:15:40.596943 2766 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46660->127.0.0.1:33687: write tcp 127.0.0.1:46660->127.0.0.1:33687: write: broken pipe
Jan 28 02:15:41.067188 kubelet[2766]: E0128 02:15:41.067067 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:15:41.308458 systemd-networkd[1469]: lxc_health: Gained IPv6LL
Jan 28 02:15:43.479064 containerd[1551]: time="2026-01-28T02:15:43.478715063Z" level=info msg="StopPodSandbox for \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\""
Jan 28 02:15:43.479064 containerd[1551]: time="2026-01-28T02:15:43.478872484Z" level=info msg="TearDown network for sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" successfully"
Jan 28 02:15:43.479064 containerd[1551]: time="2026-01-28T02:15:43.478887182Z" level=info msg="StopPodSandbox for \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" returns successfully"
Jan 28 02:15:43.482019 containerd[1551]: time="2026-01-28T02:15:43.481816903Z" level=info msg="RemovePodSandbox for \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\""
Jan 28 02:15:43.482019 containerd[1551]: time="2026-01-28T02:15:43.481855304Z" level=info msg="Forcibly stopping sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\""
Jan 28 02:15:43.482019 containerd[1551]: time="2026-01-28T02:15:43.481923511Z" level=info msg="TearDown network for sandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" successfully"
Jan 28 02:15:43.486139 containerd[1551]: time="2026-01-28T02:15:43.485978152Z" level=info msg="Ensure that sandbox c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3 in task-service has been cleanup successfully"
Jan 28 02:15:43.505842 containerd[1551]: time="2026-01-28T02:15:43.505799261Z" level=info msg="RemovePodSandbox \"c8248c0c7095813fb9f331bd8c6ea7af264411bced4eaca0ea96e0ecd69825b3\" returns successfully"
Jan 28 02:15:43.509869 containerd[1551]: time="2026-01-28T02:15:43.509842552Z" level=info msg="StopPodSandbox for \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\""
Jan 28 02:15:43.510717 containerd[1551]: time="2026-01-28T02:15:43.510661963Z" level=info msg="TearDown network for sandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" successfully"
Jan 28 02:15:43.510717 containerd[1551]: time="2026-01-28T02:15:43.510687300Z" level=info msg="StopPodSandbox for \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" returns successfully"
Jan 28 02:15:43.513833 containerd[1551]: time="2026-01-28T02:15:43.512668199Z" level=info msg="RemovePodSandbox for \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\""
Jan 28 02:15:43.513833 containerd[1551]: time="2026-01-28T02:15:43.512692604Z" level=info msg="Forcibly stopping sandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\""
Jan 28 02:15:43.513833 containerd[1551]: time="2026-01-28T02:15:43.512739772Z" level=info msg="TearDown network for sandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" successfully"
Jan 28 02:15:43.516439 containerd[1551]: time="2026-01-28T02:15:43.516411701Z" level=info msg="Ensure that sandbox 5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e in task-service has been cleanup successfully"
Jan 28 02:15:43.529956 containerd[1551]: time="2026-01-28T02:15:43.528941049Z" level=info msg="RemovePodSandbox \"5e14ef2ff75a05160618486890af61d4f966d98c7397a91b5f4d222e4af0cb1e\" returns successfully"
Jan 28 02:15:45.764808 sshd[4837]: Connection closed by 10.0.0.1 port 45732
Jan 28 02:15:45.767006 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Jan 28 02:15:45.774924 systemd[1]: sshd@38-10.0.0.150:22-10.0.0.1:45732.service: Deactivated successfully.
Jan 28 02:15:45.784168 systemd[1]: session-39.scope: Deactivated successfully.
Jan 28 02:15:45.785467 systemd[1]: session-39.scope: Consumed 1.012s CPU time, 23.7M memory peak.
Jan 28 02:15:45.789933 systemd-logind[1534]: Session 39 logged out. Waiting for processes to exit.
Jan 28 02:15:45.796157 systemd-logind[1534]: Removed session 39.