Jan 23 18:56:31.527817 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:56:31.527902 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:56:31.527921 kernel: BIOS-provided physical RAM map:
Jan 23 18:56:31.527931 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 18:56:31.527941 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 18:56:31.528042 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 18:56:31.528057 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 18:56:31.528067 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 18:56:31.528118 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 18:56:31.528129 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 18:56:31.528140 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:56:31.528154 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 18:56:31.528165 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:56:31.528175 kernel: NX (Execute Disable) protection: active
Jan 23 18:56:31.528186 kernel: APIC: Static calls initialized
Jan 23 18:56:31.528197 kernel: SMBIOS 2.8 present.
Jan 23 18:56:31.528299 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 23 18:56:31.528312 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:56:31.528323 kernel: Hypervisor detected: KVM
Jan 23 18:56:31.528332 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 18:56:31.528342 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:56:31.528354 kernel: kvm-clock: using sched offset of 26679539910 cycles
Jan 23 18:56:31.528363 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:56:31.528375 kernel: tsc: Detected 2445.426 MHz processor
Jan 23 18:56:31.528386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:56:31.528398 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:56:31.528415 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 18:56:31.528426 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 18:56:31.528436 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:56:31.528447 kernel: Using GB pages for direct mapping
Jan 23 18:56:31.528458 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:56:31.528468 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 23 18:56:31.528479 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528490 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528501 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528516 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 23 18:56:31.528527 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528537 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528549 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528559 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:56:31.528576 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 23 18:56:31.528592 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 23 18:56:31.528602 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 23 18:56:31.528616 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 23 18:56:31.528625 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 23 18:56:31.528637 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 23 18:56:31.528648 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 23 18:56:31.528660 kernel: No NUMA configuration found
Jan 23 18:56:31.528670 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 23 18:56:31.528687 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 23 18:56:31.528698 kernel: Zone ranges:
Jan 23 18:56:31.528709 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:56:31.528722 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 23 18:56:31.528731 kernel: Normal empty
Jan 23 18:56:31.528743 kernel: Device empty
Jan 23 18:56:31.528753 kernel: Movable zone start for each node
Jan 23 18:56:31.528765 kernel: Early memory node ranges
Jan 23 18:56:31.528776 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 18:56:31.528788 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 23 18:56:31.528803 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 23 18:56:31.528815 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:56:31.528826 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 18:56:31.528879 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 23 18:56:31.528892 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:56:31.528903 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:56:31.528915 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:56:31.528926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:56:31.529035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:56:31.529055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:56:31.529067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:56:31.529078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:56:31.529090 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:56:31.529100 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 18:56:31.529113 kernel: TSC deadline timer available
Jan 23 18:56:31.529122 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:56:31.529134 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:56:31.529145 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:56:31.529160 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:56:31.529172 kernel: CPU topo: Num. cores per package: 4
Jan 23 18:56:31.529183 kernel: CPU topo: Num. threads per package: 4
Jan 23 18:56:31.529194 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 18:56:31.529205 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:56:31.529217 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 18:56:31.529227 kernel: kvm-guest: setup PV sched yield
Jan 23 18:56:31.529239 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 18:56:31.529249 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:56:31.530144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:56:31.530159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 18:56:31.530169 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 18:56:31.530181 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 18:56:31.530192 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 18:56:31.530203 kernel: kvm-guest: PV spinlocks enabled
Jan 23 18:56:31.530215 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:56:31.530228 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:56:31.530245 kernel: random: crng init done
Jan 23 18:56:31.530307 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:56:31.530319 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:56:31.530330 kernel: Fallback order for Node 0: 0
Jan 23 18:56:31.530340 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 23 18:56:31.530353 kernel: Policy zone: DMA32
Jan 23 18:56:31.530363 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:56:31.530375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 18:56:31.530387 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:56:31.530403 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:56:31.530414 kernel: Dynamic Preempt: voluntary
Jan 23 18:56:31.530426 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:56:31.530438 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:56:31.530451 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 18:56:31.530463 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:56:31.530516 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:56:31.530529 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:56:31.533148 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:56:31.533162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 18:56:31.533182 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:56:31.533192 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:56:31.533204 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:56:31.533215 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 18:56:31.533226 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:56:31.533249 kernel: Console: colour VGA+ 80x25
Jan 23 18:56:31.533323 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:56:31.533334 kernel: ACPI: Core revision 20240827
Jan 23 18:56:31.533345 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:56:31.533357 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:56:31.533368 kernel: x2apic enabled
Jan 23 18:56:31.533383 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:56:31.533437 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 18:56:31.533450 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 18:56:31.533461 kernel: kvm-guest: setup PV IPIs
Jan 23 18:56:31.533472 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:56:31.533488 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 18:56:31.533499 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 23 18:56:31.533510 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:56:31.533521 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:56:31.533532 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:56:31.533543 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:56:31.533555 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:56:31.533566 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:56:31.533575 kernel: Speculative Store Bypass: Vulnerable
Jan 23 18:56:31.533591 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 18:56:31.533604 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 18:56:31.533615 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:56:31.533627 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 18:56:31.533638 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:56:31.533649 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:56:31.533660 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:56:31.533672 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:56:31.533686 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:56:31.533697 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:56:31.533709 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 18:56:31.533721 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:56:31.533732 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:56:31.533742 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:56:31.533754 kernel: landlock: Up and running.
Jan 23 18:56:31.533765 kernel: SELinux: Initializing.
Jan 23 18:56:31.533776 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:56:31.533792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:56:31.533841 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 18:56:31.533854 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 18:56:31.533865 kernel: signal: max sigframe size: 1776
Jan 23 18:56:31.533876 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:56:31.533889 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:56:31.533901 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:56:31.533910 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 18:56:31.533922 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:56:31.533937 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:56:31.534098 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 18:56:31.534115 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 18:56:31.534126 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 23 18:56:31.534138 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Jan 23 18:56:31.534149 kernel: devtmpfs: initialized
Jan 23 18:56:31.534161 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:56:31.534172 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:56:31.534183 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 18:56:31.534200 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:56:31.534211 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:56:31.534222 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:56:31.534235 kernel: audit: type=2000 audit(1769194584.217:1): state=initialized audit_enabled=0 res=1
Jan 23 18:56:31.534245 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:56:31.534306 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:56:31.534319 kernel: cpuidle: using governor menu
Jan 23 18:56:31.534330 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:56:31.534342 kernel: dca service started, version 1.12.1
Jan 23 18:56:31.534358 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 18:56:31.534369 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 18:56:31.534381 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:56:31.534392 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:56:31.534403 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:56:31.534415 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:56:31.534426 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:56:31.534437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:56:31.534449 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:56:31.534463 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:56:31.534474 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:56:31.534485 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:56:31.534497 kernel: ACPI: Interpreter enabled
Jan 23 18:56:31.534508 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 18:56:31.534519 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:56:31.534534 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:56:31.534545 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:56:31.534556 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:56:31.534571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:56:31.535403 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:56:31.535615 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:56:31.535814 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:56:31.535831 kernel: PCI host bridge to bus 0000:00
Jan 23 18:56:31.536547 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:56:31.536749 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:56:31.537039 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:56:31.537225 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 23 18:56:31.537515 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 18:56:31.537694 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 23 18:56:31.537870 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:56:31.538353 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:56:31.538663 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:56:31.538859 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 18:56:31.539175 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 18:56:31.539553 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 18:56:31.539750 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:56:31.539945 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 16601 usecs
Jan 23 18:56:31.540417 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 18:56:31.540623 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 18:56:31.540820 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 18:56:31.541157 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 18:56:31.541659 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 18:56:31.541857 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 18:56:31.542141 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 18:56:31.542398 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 18:56:31.542743 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 18:56:31.542939 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 23 18:56:31.543242 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 23 18:56:31.543501 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 23 18:56:31.543693 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 18:56:31.544069 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 18:56:31.544344 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 18:56:31.544677 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 18:56:31.544920 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 23 18:56:31.545199 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 23 18:56:31.545547 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 18:56:31.545743 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 18:56:31.545760 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 18:56:31.545777 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 18:56:31.545789 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 18:56:31.545800 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 18:56:31.545811 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 18:56:31.545823 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 18:56:31.545834 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 18:56:31.545845 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 18:56:31.545856 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 18:56:31.545867 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 18:56:31.545882 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 18:56:31.545895 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 18:56:31.545905 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 18:56:31.545916 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 18:56:31.545927 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 18:56:31.545938 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 18:56:31.546032 kernel: iommu: Default domain type: Translated
Jan 23 18:56:31.546045 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:56:31.546056 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:56:31.546072 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 18:56:31.546084 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 18:56:31.546096 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 23 18:56:31.546350 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 18:56:31.546546 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 18:56:31.547100 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 18:56:31.547121 kernel: vgaarb: loaded
Jan 23 18:56:31.547133 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 18:56:31.547145 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 18:56:31.547165 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 18:56:31.547175 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:56:31.547186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:56:31.547198 kernel: pnp: PnP ACPI init
Jan 23 18:56:31.547659 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 18:56:31.547679 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 18:56:31.547693 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:56:31.547703 kernel: NET: Registered PF_INET protocol family
Jan 23 18:56:31.547720 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:56:31.547733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 18:56:31.547744 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:56:31.547756 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 18:56:31.547767 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 18:56:31.547779 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 18:56:31.547790 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:56:31.547800 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:56:31.547812 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:56:31.547828 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:56:31.548213 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 18:56:31.548459 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 18:56:31.548645 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 18:56:31.548825 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 23 18:56:31.549096 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 18:56:31.549340 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 23 18:56:31.549358 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:56:31.549376 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 18:56:31.549388 kernel: Initialise system trusted keyrings
Jan 23 18:56:31.549399 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 18:56:31.549410 kernel: Key type asymmetric registered
Jan 23 18:56:31.549421 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:56:31.549432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:56:31.549443 kernel: io scheduler mq-deadline registered
Jan 23 18:56:31.549455 kernel: io scheduler kyber registered
Jan 23 18:56:31.549466 kernel: io scheduler bfq registered
Jan 23 18:56:31.549480 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:56:31.549494 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 18:56:31.549508 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 18:56:31.549520 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 18:56:31.549530 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:56:31.549540 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:56:31.549552 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 18:56:31.549564 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 18:56:31.549575 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 18:56:31.549944 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 18:56:31.550216 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 18:56:31.550233 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 18:56:31.550477 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T18:56:30 UTC (1769194590)
Jan 23 18:56:31.550659 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 18:56:31.550675 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 18:56:31.550687 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:56:31.550700 kernel: Segment Routing with IPv6
Jan 23 18:56:31.550715 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:56:31.550727 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:56:31.550738 kernel: Key type dns_resolver registered
Jan 23 18:56:31.550750 kernel: IPI shorthand broadcast: enabled
Jan 23 18:56:31.550761 kernel: sched_clock: Marking stable (6046040927, 613246356)->(7153683019, -494395736)
Jan 23 18:56:31.550772 kernel: registered taskstats version 1
Jan 23 18:56:31.550783 kernel: Loading compiled-in X.509 certificates
Jan 23 18:56:31.550795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:56:31.550806 kernel: Demotion targets for Node 0: null
Jan 23 18:56:31.550821 kernel: Key type .fscrypt registered
Jan 23 18:56:31.550832 kernel: Key type fscrypt-provisioning registered
Jan 23 18:56:31.550843 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:56:31.550855 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:56:31.550866 kernel: ima: No architecture policies found
Jan 23 18:56:31.550878 kernel: clk: Disabling unused clocks
Jan 23 18:56:31.550889 kernel: Warning: unable to open an initial console.
Jan 23 18:56:31.550900 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:56:31.550912 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:56:31.550927 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:56:31.550938 kernel: Run /init as init process
Jan 23 18:56:31.551032 kernel: with arguments:
Jan 23 18:56:31.551046 kernel: /init
Jan 23 18:56:31.551058 kernel: with environment:
Jan 23 18:56:31.551069 kernel: HOME=/
Jan 23 18:56:31.551080 kernel: TERM=linux
Jan 23 18:56:31.551092 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:56:31.551108 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:56:31.551126 systemd[1]: Detected virtualization kvm.
Jan 23 18:56:31.551138 systemd[1]: Detected architecture x86-64.
Jan 23 18:56:31.551149 systemd[1]: Running in initrd.
Jan 23 18:56:31.551161 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:56:31.551173 systemd[1]: Hostname set to .
Jan 23 18:56:31.551185 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 18:56:31.551197 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:56:31.551226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:56:31.551243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:56:31.551316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:56:31.551330 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:56:31.551343 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:56:31.551361 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:56:31.551375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:56:31.551387 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:56:31.551400 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:56:31.551412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:56:31.551424 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:56:31.551437 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:56:31.551450 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:56:31.551466 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:56:31.551478 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:56:31.551490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:56:31.551503 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:56:31.551515 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:56:31.551527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:56:31.551540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:56:31.551552 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:56:31.551571 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:56:31.551586 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:56:31.551599 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:56:31.551611 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:56:31.551625 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:56:31.551636 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:56:31.551649 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:56:31.551661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:56:31.551674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:31.551691 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:56:31.551789 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 18:56:31.551824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:56:31.551839 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:56:31.551852 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:56:31.551869 systemd-journald[203]: Journal started
Jan 23 18:56:31.551894 systemd-journald[203]: Runtime Journal (/run/log/journal/0d00323c2f55489ab32ee00d02e7237b) is 6M, max 48.3M, 42.2M free.
Jan 23 18:56:31.576337 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:56:31.559214 systemd-modules-load[204]: Inserted module 'overlay'
Jan 23 18:56:31.587806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:56:31.626150 systemd-tmpfiles[214]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:56:31.638186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:56:32.016513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:56:32.016565 kernel: Bridge firewalling registered
Jan 23 18:56:31.668325 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 23 18:56:32.023386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:56:32.038563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:32.059105 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:56:32.111036 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:56:32.119639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:56:32.161192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:56:32.187593 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:56:32.221169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:56:32.227885 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:56:32.263742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:56:32.290711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:56:32.378344 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:56:32.389153 systemd-resolved[245]: Positive Trust Anchors:
Jan 23 18:56:32.389172 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:56:32.389219 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:56:32.394607 systemd-resolved[245]: Defaulting to hostname 'linux'.
Jan 23 18:56:32.406552 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:56:32.440378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:56:32.900713 kernel: hrtimer: interrupt took 5416901 ns
Jan 23 18:56:32.951472 kernel: SCSI subsystem initialized
Jan 23 18:56:32.974684 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:56:33.027469 kernel: iscsi: registered transport (tcp)
Jan 23 18:56:33.130218 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:56:33.130417 kernel: QLogic iSCSI HBA Driver
Jan 23 18:56:33.414232 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:56:33.560502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:56:33.576646 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:56:36.642152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:56:36.761392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:56:37.175787 kernel: raid6: avx2x4 gen() 6978 MB/s
Jan 23 18:56:37.207285 kernel: raid6: avx2x2 gen() 10516 MB/s
Jan 23 18:56:37.237803 kernel: raid6: avx2x1 gen() 476 MB/s
Jan 23 18:56:37.238080 kernel: raid6: using algorithm avx2x2 gen() 10516 MB/s
Jan 23 18:56:37.266287 kernel: raid6: .... xor() 11879 MB/s, rmw enabled
Jan 23 18:56:37.266620 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 18:56:37.437731 kernel: xor: automatically using best checksumming function avx
Jan 23 18:56:39.174901 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:56:39.256595 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:56:39.297138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:56:39.493317 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 23 18:56:39.582874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:56:39.640639 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:56:39.876710 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Jan 23 18:56:40.246796 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:56:40.269832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:56:40.578678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:56:40.636815 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:56:40.772095 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 18:56:40.827331 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 18:56:40.827525 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 18:56:40.859771 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 18:56:40.859887 kernel: GPT:9289727 != 19775487
Jan 23 18:56:40.859901 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 18:56:40.866876 kernel: GPT:9289727 != 19775487
Jan 23 18:56:40.877230 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 18:56:40.877273 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 18:56:40.883687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 18:56:40.914466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:56:40.923174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:40.943593 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:40.957779 kernel: libata version 3.00 loaded.
Jan 23 18:56:40.957638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:40.966415 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:56:40.999164 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 18:56:41.008188 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 18:56:41.008695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 18:56:41.047539 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 18:56:41.047849 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 18:56:41.050297 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 18:56:41.082746 kernel: AES CTR mode by8 optimization enabled
Jan 23 18:56:41.082813 kernel: scsi host0: ahci
Jan 23 18:56:41.079532 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 18:56:41.106040 kernel: scsi host1: ahci
Jan 23 18:56:41.111097 kernel: scsi host2: ahci
Jan 23 18:56:41.121063 kernel: scsi host3: ahci
Jan 23 18:56:41.121487 kernel: scsi host4: ahci
Jan 23 18:56:41.127278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 18:56:41.144124 kernel: scsi host5: ahci
Jan 23 18:56:41.154190 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 18:56:41.224880 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Jan 23 18:56:41.224919 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Jan 23 18:56:41.225145 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Jan 23 18:56:41.225167 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Jan 23 18:56:41.225183 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Jan 23 18:56:41.225198 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Jan 23 18:56:41.185506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 18:56:41.687264 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 18:56:41.687301 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 18:56:41.687326 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 18:56:41.687414 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 18:56:41.687439 kernel: ata3.00: applying bridge limits
Jan 23 18:56:41.687455 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 18:56:41.687470 kernel: ata3.00: configured for UDMA/100
Jan 23 18:56:41.687484 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 18:56:41.687499 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 18:56:41.687513 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 18:56:41.687528 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 18:56:41.687548 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 18:56:41.688062 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 18:56:41.688333 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 18:56:41.688426 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 18:56:41.189922 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 18:56:41.729319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:41.770415 disk-uuid[621]: Primary Header is updated.
Jan 23 18:56:41.770415 disk-uuid[621]: Secondary Entries is updated.
Jan 23 18:56:41.770415 disk-uuid[621]: Secondary Header is updated.
Jan 23 18:56:41.816807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 18:56:42.207836 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:56:42.226677 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:56:42.236102 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:56:42.236316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:56:42.268860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 18:56:42.358344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:56:42.836218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 18:56:42.844754 disk-uuid[622]: The operation has completed successfully.
Jan 23 18:56:42.935468 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 18:56:42.935745 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 18:56:43.048313 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 18:56:43.084329 sh[651]: Success
Jan 23 18:56:43.170183 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:56:43.170286 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:56:43.184666 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:56:43.289604 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 18:56:43.467246 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:56:43.506615 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 18:56:43.572235 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:56:43.641255 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (663)
Jan 23 18:56:43.659236 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 18:56:43.659520 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:43.721600 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:56:43.721700 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:56:43.730104 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 18:56:43.749874 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:56:43.767619 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:56:43.782650 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 18:56:43.832781 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 18:56:43.920760 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Jan 23 18:56:43.945827 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:43.945917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:43.982858 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 18:56:43.983086 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 18:56:44.024714 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:44.033924 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 18:56:44.043045 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 18:56:44.470746 ignition[745]: Ignition 2.22.0
Jan 23 18:56:44.470812 ignition[745]: Stage: fetch-offline
Jan 23 18:56:44.470870 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:44.470884 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 18:56:44.471142 ignition[745]: parsed url from cmdline: ""
Jan 23 18:56:44.471149 ignition[745]: no config URL provided
Jan 23 18:56:44.471157 ignition[745]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:56:44.471171 ignition[745]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:56:44.471247 ignition[745]: op(1): [started] loading QEMU firmware config module
Jan 23 18:56:44.471255 ignition[745]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 18:56:44.559664 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:56:44.583628 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:56:44.581446 ignition[745]: op(1): [finished] loading QEMU firmware config module
Jan 23 18:56:44.743644 systemd-networkd[840]: lo: Link UP
Jan 23 18:56:44.743688 systemd-networkd[840]: lo: Gained carrier
Jan 23 18:56:44.752774 systemd-networkd[840]: Enumeration completed
Jan 23 18:56:44.753309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:56:44.766582 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:56:44.766591 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:56:44.779773 systemd-networkd[840]: eth0: Link UP
Jan 23 18:56:44.780156 systemd-networkd[840]: eth0: Gained carrier
Jan 23 18:56:44.780180 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:56:44.821185 systemd[1]: Reached target network.target - Network.
Jan 23 18:56:44.935843 systemd-networkd[840]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 18:56:45.266225 systemd-resolved[245]: Detected conflict on linux IN A 10.0.0.36
Jan 23 18:56:45.269181 systemd-resolved[245]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jan 23 18:56:45.328808 ignition[745]: parsing config with SHA512: 8af7ce921f9e9e3882bd2eb576239489036edfc3e05bb39ffed18913ec95b25ab1699ed3ac50b23f3801666396a5d1e1cf340f839d37ca42cb61a73c4511a724
Jan 23 18:56:45.427172 unknown[745]: fetched base config from "system"
Jan 23 18:56:45.430139 unknown[745]: fetched user config from "qemu"
Jan 23 18:56:45.430642 ignition[745]: fetch-offline: fetch-offline passed
Jan 23 18:56:45.439106 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:56:45.430756 ignition[745]: Ignition finished successfully
Jan 23 18:56:45.452906 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 18:56:45.455536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 18:56:45.630892 ignition[846]: Ignition 2.22.0
Jan 23 18:56:45.630943 ignition[846]: Stage: kargs
Jan 23 18:56:45.631370 ignition[846]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:45.631458 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 18:56:45.668053 ignition[846]: kargs: kargs passed
Jan 23 18:56:45.668186 ignition[846]: Ignition finished successfully
Jan 23 18:56:45.685630 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 18:56:45.710239 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 18:56:45.909494 ignition[854]: Ignition 2.22.0
Jan 23 18:56:45.910828 ignition[854]: Stage: disks
Jan 23 18:56:45.911126 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:45.911142 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 18:56:45.913721 ignition[854]: disks: disks passed
Jan 23 18:56:45.913795 ignition[854]: Ignition finished successfully
Jan 23 18:56:45.953875 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 18:56:45.980269 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 18:56:45.983908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 18:56:46.026358 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:56:46.047474 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:56:46.067773 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:56:46.092852 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 18:56:46.218556 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 18:56:46.238488 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 18:56:46.276814 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 18:56:46.484570 systemd-networkd[840]: eth0: Gained IPv6LL
Jan 23 18:56:46.950103 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 18:56:46.954767 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 18:56:46.960690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:56:46.980857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:56:46.982862 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 18:56:47.010226 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 18:56:47.010628 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 18:56:47.010676 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:56:47.046898 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 18:56:47.052611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 18:56:47.131626 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872)
Jan 23 18:56:47.236341 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 18:56:47.252087 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory
Jan 23 18:56:47.490682 initrd-setup-root[894]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 18:56:47.511233 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:47.511336 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:47.511358 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 18:56:47.525301 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 18:56:47.525333 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 18:56:47.528539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:56:47.829617 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 18:56:47.837945 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 18:56:47.851164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 18:56:47.883118 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 18:56:47.903788 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:47.930337 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 18:56:48.277477 ignition[985]: INFO : Ignition 2.22.0
Jan 23 18:56:48.277477 ignition[985]: INFO : Stage: mount
Jan 23 18:56:48.277477 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:48.277477 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 18:56:48.316110 ignition[985]: INFO : mount: mount passed
Jan 23 18:56:48.316110 ignition[985]: INFO : Ignition finished successfully
Jan 23 18:56:48.301181 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 18:56:48.307580 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 18:56:48.435644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:56:48.514091 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Jan 23 18:56:48.514165 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:56:48.523890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:48.572792 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:56:48.572926 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:56:48.585806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:56:48.705802 ignition[1014]: INFO : Ignition 2.22.0 Jan 23 18:56:48.705802 ignition[1014]: INFO : Stage: files Jan 23 18:56:48.752749 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:48.752749 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:48.752749 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:56:48.811463 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:56:48.811463 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:56:48.811463 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:56:48.811463 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:56:48.811463 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:56:48.811463 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:56:48.811463 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:56:48.778841 unknown[1014]: wrote ssh authorized keys file for user: core Jan 23 18:56:48.908726 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:56:49.087422 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:56:49.087422 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 18:56:49.087422 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 18:56:49.260281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 18:56:49.585108 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 18:56:49.585108 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:56:49.608550 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 18:56:49.856107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 18:56:50.728203 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:56:50.728203 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 18:56:50.756577 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 23 18:56:50.777449 ignition[1014]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 18:56:50.899897 ignition[1014]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:56:50.899897 ignition[1014]: INFO : files: files passed Jan 23 18:56:50.899897 ignition[1014]: INFO : Ignition finished successfully Jan 23 18:56:50.972203 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:56:51.004731 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:56:51.023589 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:56:51.050184 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:56:51.050501 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:56:51.079940 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 18:56:51.104358 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:51.104358 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:51.148562 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:51.168915 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:56:51.184458 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:56:51.198881 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:56:51.336374 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:56:51.336707 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:56:51.345232 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:56:51.350794 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:56:51.372517 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:56:51.386210 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:56:51.461843 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:56:51.474821 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:56:51.549876 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:56:51.575287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:51.589708 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:56:51.604240 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:56:51.604524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
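The op(10)/op(11) entries above show what a "disable" preset amounts to in practice: removing the enablement symlinks that pull coreos-metadata.service into a target's .wants directory. A rough Python equivalent operating on the /sysroot prefix used during this stage (the .wants layout is standard systemd; the helper itself is illustrative):

    import glob
    import os

    def remove_enablement_symlinks(root: str, unit: str) -> None:
        """Drop <target>.wants/<unit> links under etc/systemd/system,
        which is what a 'preset: disabled' does for an enabled unit."""
        pattern = os.path.join(root, "etc/systemd/system", "*.wants", unit)
        for link in glob.glob(pattern):
            if os.path.islink(link):
                os.unlink(link)

    remove_enablement_symlinks("/sysroot", "coreos-metadata.service")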
Jan 23 18:56:51.609354 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:56:51.618643 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:56:51.636718 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:56:51.654857 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:56:51.698902 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:56:51.721280 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:56:51.743106 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:56:51.769633 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:56:51.770578 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:56:51.797659 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:56:51.849900 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:56:51.861156 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:56:51.861574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:56:51.877115 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:51.886835 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:51.900342 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:56:51.903787 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:51.926596 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:56:51.926794 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:56:51.951140 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:56:51.951282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:56:51.960217 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:56:51.983131 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:56:52.005460 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:56:52.019694 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:56:52.036364 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:56:52.052827 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:56:52.053087 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:56:52.060909 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:56:52.061181 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:56:52.084926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:56:52.085308 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:56:52.112491 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:56:52.112753 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:56:52.140156 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:56:52.159859 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:56:52.160298 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:52.237862 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 23 18:56:52.248598 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:56:52.249809 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:52.278866 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:56:52.279223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:56:52.380938 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:56:52.463238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:56:52.511179 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:56:52.527470 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:56:52.527755 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:56:52.590339 ignition[1069]: INFO : Ignition 2.22.0 Jan 23 18:56:52.590339 ignition[1069]: INFO : Stage: umount Jan 23 18:56:52.609584 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:52.609584 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:52.609584 ignition[1069]: INFO : umount: umount passed Jan 23 18:56:52.609584 ignition[1069]: INFO : Ignition finished successfully Jan 23 18:56:52.622394 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:56:52.623518 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:56:52.648883 systemd[1]: Stopped target network.target - Network. Jan 23 18:56:52.657547 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:56:52.657684 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:56:52.674205 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:56:52.674300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:56:52.710765 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:56:52.710863 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:56:52.720853 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:56:52.721187 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:56:52.739360 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:56:52.739561 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:56:52.766502 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:56:52.826797 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:56:52.934335 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:56:52.936915 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:56:52.961333 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:56:52.961839 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:56:52.962160 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:56:53.000850 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:56:53.006548 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:56:53.021936 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:56:53.022206 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:53.047264 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 23 18:56:53.047555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:56:53.047644 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:56:53.075885 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:56:53.076335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:53.114745 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:56:53.114914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:53.130909 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:56:53.131374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:56:53.180208 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:56:53.198916 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:56:53.202693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:56:53.240351 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:56:53.243489 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:56:53.261751 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:56:53.262617 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:56:53.307476 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:56:53.307638 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:53.320813 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:56:53.320873 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:53.340100 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:56:53.340234 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:56:53.360359 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:56:53.360753 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:56:53.374548 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:56:53.374672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:56:53.395735 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:56:53.421803 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:56:53.422043 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:56:53.448934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:56:53.449248 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:56:53.482905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:53.483142 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:53.507138 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 18:56:53.507258 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 18:56:53.507361 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 23 18:56:53.516849 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:56:53.517415 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:56:53.519546 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:56:53.533169 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:56:53.661214 systemd[1]: Switching root. Jan 23 18:56:53.788322 systemd-journald[203]: Journal stopped Jan 23 18:56:59.107293 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 23 18:56:59.107408 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:56:59.107436 kernel: SELinux: policy capability open_perms=1 Jan 23 18:56:59.107460 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:56:59.107543 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:56:59.107640 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:56:59.107664 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:56:59.107689 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:56:59.107706 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:56:59.107731 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:56:59.107759 kernel: audit: type=1403 audit(1769194614.309:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:56:59.107788 systemd[1]: Successfully loaded SELinux policy in 166.120ms. Jan 23 18:56:59.107808 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.902ms. Jan 23 18:56:59.107826 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:56:59.107850 systemd[1]: Detected virtualization kvm. Jan 23 18:56:59.107867 systemd[1]: Detected architecture x86-64. Jan 23 18:56:59.107887 systemd[1]: Detected first boot. Jan 23 18:56:59.107907 systemd[1]: Initializing machine ID from VM UUID. Jan 23 18:56:59.107925 zram_generator::config[1115]: No configuration found. Jan 23 18:56:59.107944 kernel: Guest personality initialized and is inactive Jan 23 18:56:59.108045 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:56:59.108063 kernel: Initialized host personality Jan 23 18:56:59.108079 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:56:59.108096 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:56:59.108115 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:56:59.108177 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:56:59.108231 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:56:59.108243 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:56:59.108255 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:56:59.108267 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:56:59.108279 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:56:59.108290 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
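The systemd banner above reports each compile-time option as a +/- token (+SELINUX built in, -APPARMOR not, and so on). Splitting that string mechanically, using a few of the tokens quoted from the line above:

    flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
             "-GCRYPT +OPENSSL -ACL +TPM2 -SYSVINIT").split()
    enabled = sorted(f[1:] for f in flags if f.startswith("+"))
    disabled = sorted(f[1:] for f in flags if f.startswith("-"))
    print("enabled:", enabled)    # AUDIT, IMA, OPENSSL, PAM, ...
    print("disabled:", disabled)  # ACL, APPARMOR, GCRYPT, SYSVINIT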
Jan 23 18:56:59.108302 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:56:59.108318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:56:59.108330 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:56:59.108341 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:56:59.108357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:59.108378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:56:59.108400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:56:59.108422 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:56:59.108445 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:56:59.108468 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:56:59.108547 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:56:59.108569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:59.108592 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:59.108614 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:56:59.108636 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:56:59.108705 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:56:59.108727 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:56:59.108751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:59.108824 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:56:59.108843 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:56:59.108860 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:56:59.108877 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:56:59.108894 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:56:59.108912 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:56:59.108928 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:59.108941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:59.109056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:59.109083 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:56:59.109105 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:56:59.109126 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:56:59.109149 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:56:59.109166 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:56:59.109183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:56:59.109204 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
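Unit names such as system-serial\x2dgetty.slice above illustrate systemd's escaping rule: because "-" separates hierarchy components in slice names, a literal dash inside a component is written as \x2d. A short decoder for that convention (a sketch handling only the \xXX escapes seen here):

    def unescape_unit(name: str) -> str:
        """Decode systemd's \\xXX escapes, e.g. \\x2d back to '-'."""
        out, i = [], 0
        while i < len(name):
            if name.startswith("\\x", i) and i + 4 <= len(name):
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            else:
                out.append(name[i])
                i += 1
        return "".join(out)

    print(unescape_unit(r"system-serial\x2dgetty.slice"))  # system-serial-getty.slice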
Jan 23 18:56:59.109224 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:56:59.109247 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:56:59.109367 systemd[1]: Reached target machines.target - Containers. Jan 23 18:56:59.109391 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:56:59.109412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:56:59.109432 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:56:59.109450 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:56:59.109529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:56:59.109555 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:56:59.109575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:56:59.109598 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:56:59.109619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:56:59.109642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:56:59.109664 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:56:59.109682 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:56:59.109699 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:56:59.109716 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:56:59.109734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:56:59.109758 kernel: ACPI: bus type drm_connector registered Jan 23 18:56:59.109780 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:56:59.109801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:56:59.109820 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:56:59.109837 kernel: fuse: init (API version 7.41) Jan 23 18:56:59.109922 kernel: loop: module loaded Jan 23 18:56:59.109942 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:56:59.110047 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:56:59.110115 systemd-journald[1200]: Collecting audit messages is disabled. Jan 23 18:56:59.110156 systemd-journald[1200]: Journal started Jan 23 18:56:59.110193 systemd-journald[1200]: Runtime Journal (/run/log/journal/0d00323c2f55489ab32ee00d02e7237b) is 6M, max 48.3M, 42.2M free. Jan 23 18:56:57.667120 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:56:57.692364 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:56:57.695436 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:56:57.715303 systemd[1]: systemd-journald.service: Consumed 1.752s CPU time. 
Jan 23 18:56:59.132851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:56:59.159768 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:56:59.159863 systemd[1]: Stopped verity-setup.service. Jan 23 18:56:59.181177 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:56:59.196048 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:56:59.205441 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:56:59.216898 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:56:59.227763 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:56:59.233680 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:56:59.244271 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:56:59.253274 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:56:59.266877 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:56:59.277571 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:59.284797 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:56:59.285298 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:56:59.294105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:56:59.294581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:56:59.302682 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:56:59.303127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:56:59.311457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:56:59.313087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:56:59.325897 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:56:59.326367 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:56:59.337640 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:56:59.338058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:56:59.349709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:59.362132 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:56:59.371439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:56:59.385570 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:56:59.445059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:59.461405 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:56:59.473472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:56:59.486346 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:56:59.500356 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:56:59.500734 systemd[1]: Reached target local-fs.target - Local File Systems. 
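The modprobe@*.service entries above are instances of a single template unit: the text after "@" names the module, and the stock template hands it to modprobe. A sketch of that mapping, assuming the usual `modprobe -abq %i` invocation of systemd's modprobe@.service (quoted from memory, so treat the exact flags as an assumption):

    import subprocess

    def load_module_for(unit: str) -> None:
        """Extract the instance from 'modprobe@<module>.service' and load it."""
        instance = unit.split("@", 1)[1].removesuffix(".service")  # e.g. "dm_mod"
        subprocess.run(["modprobe", "-abq", instance], check=False)

    for unit in ("modprobe@configfs.service", "modprobe@dm_mod.service",
                 "modprobe@loop.service", "modprobe@fuse.service"):
        load_module_for(unit)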
Jan 23 18:56:59.512410 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:56:59.526854 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:56:59.532763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:56:59.536587 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:56:59.569279 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:56:59.583778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:56:59.589677 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:56:59.594760 systemd-journald[1200]: Time spent on flushing to /var/log/journal/0d00323c2f55489ab32ee00d02e7237b is 78.243ms for 979 entries. Jan 23 18:56:59.594760 systemd-journald[1200]: System Journal (/var/log/journal/0d00323c2f55489ab32ee00d02e7237b) is 8M, max 195.6M, 187.6M free. Jan 23 18:56:59.700758 systemd-journald[1200]: Received client request to flush runtime journal. Jan 23 18:56:59.605267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:56:59.612454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:56:59.625550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:56:59.650706 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:56:59.663357 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:56:59.673633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:56:59.706440 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:56:59.716188 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:56:59.749404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:56:59.772619 kernel: loop0: detected capacity change from 0 to 128560 Jan 23 18:56:59.778396 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:56:59.789808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:59.867338 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:56:59.887630 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:56:59.886254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:56:59.902768 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:56:59.904940 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:56:59.944303 kernel: loop1: detected capacity change from 0 to 229808 Jan 23 18:56:59.976743 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 23 18:56:59.976771 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 23 18:56:59.992142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
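journald's flush report above gives enough to estimate per-entry cost: 78.243 ms for 979 entries while moving the runtime journal to persistent storage, roughly 80 microseconds each. Parsing the figures out of such a line (the line text is quoted from the log; the regex is illustrative):

    import re

    line = ("Time spent on flushing to /var/log/journal/"
            "0d00323c2f55489ab32ee00d02e7237b is 78.243ms for 979 entries.")
    m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
    ms, entries = float(m.group(1)), int(m.group(2))
    print(f"{ms * 1000 / entries:.1f} us per entry")  # ~79.9 us on this boot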
Jan 23 18:57:00.017728 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 18:57:00.149093 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 18:57:00.209340 kernel: loop4: detected capacity change from 0 to 229808 Jan 23 18:57:00.253688 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 18:57:00.323120 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 23 18:57:00.328844 (sd-merge)[1259]: Merged extensions into '/usr'. Jan 23 18:57:00.348655 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:57:00.348720 systemd[1]: Reloading... Jan 23 18:57:00.477063 zram_generator::config[1284]: No configuration found. Jan 23 18:57:00.771917 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:57:00.819095 systemd[1]: Reloading finished in 469 ms. Jan 23 18:57:00.860275 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:57:00.871343 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:57:00.886197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:57:00.934672 systemd[1]: Starting ensure-sysext.service... Jan 23 18:57:00.943051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:57:00.957614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:57:00.997920 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:57:00.998665 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:57:01.000126 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:57:01.001827 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 18:57:01.002318 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:57:01.002378 systemd[1]: Reloading... Jan 23 18:57:01.004148 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 18:57:01.004647 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jan 23 18:57:01.004915 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jan 23 18:57:01.014852 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:57:01.014924 systemd-tmpfiles[1324]: Skipping /boot Jan 23 18:57:01.045271 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:57:01.045343 systemd-tmpfiles[1324]: Skipping /boot Jan 23 18:57:01.051057 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 23 18:57:01.127141 zram_generator::config[1352]: No configuration found. 
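The (sd-merge) lines above record systemd-sysext stacking the containerd-flatcar, docker-flatcar and kubernetes images over /usr, followed by a service-manager reload so the merged units become visible; the kubernetes image is the one Ignition linked into /etc/extensions earlier. A sketch that lists candidate images the way the sysext directories are commonly searched (the three paths below are standard search locations, stated as an assumption, and not the full list):

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if d.is_dir():
            for image in sorted(d.glob("*.raw")):
                print(image)  # e.g. /etc/extensions/kubernetes.raw on this host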
Jan 23 18:57:01.466088 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:57:01.475125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 18:57:01.484117 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:57:01.538319 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:57:01.538831 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:57:01.576377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:57:01.578107 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:57:01.588427 systemd[1]: Reloading finished in 585 ms. Jan 23 18:57:01.606219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:57:01.645144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:57:01.750420 systemd[1]: Finished ensure-sysext.service. Jan 23 18:57:01.764082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:01.766396 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:57:01.780383 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:57:01.787683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:57:01.792722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:57:01.809921 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:57:01.826793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:57:01.835487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:57:01.845244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:57:01.924062 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:57:01.931200 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:57:01.955682 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:57:02.047448 augenrules[1471]: No rules Jan 23 18:57:02.076345 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:57:02.094308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:57:02.109576 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:57:02.121096 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:57:02.134269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:57:02.141657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:02.154755 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:57:02.206921 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 18:57:02.215485 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:57:02.225218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:57:02.226171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:57:02.246605 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:57:02.247150 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:57:02.255453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:57:02.255911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:57:02.277257 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:57:02.277666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:57:02.287810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:57:02.289111 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:57:02.366151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:57:02.387588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:57:02.387790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:57:02.395344 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:57:02.593929 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:57:02.595299 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:57:02.629655 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:57:03.127495 kernel: kvm_amd: TSC scaling supported Jan 23 18:57:03.129196 kernel: kvm_amd: Nested Virtualization enabled Jan 23 18:57:03.129291 kernel: kvm_amd: Nested Paging enabled Jan 23 18:57:03.129413 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 18:57:03.129610 kernel: kvm_amd: PMU virtualization is disabled Jan 23 18:57:03.392461 systemd-networkd[1468]: lo: Link UP Jan 23 18:57:03.417676 systemd-networkd[1468]: lo: Gained carrier Jan 23 18:57:03.426789 systemd-networkd[1468]: Enumeration completed Jan 23 18:57:03.429464 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:57:03.429473 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:57:03.431946 systemd-networkd[1468]: eth0: Link UP Jan 23 18:57:03.432374 systemd-networkd[1468]: eth0: Gained carrier Jan 23 18:57:03.432399 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:57:03.487243 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:57:03.490273 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:57:03.517194 systemd-networkd[1468]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:57:03.518051 systemd[1]: Started systemd-networkd.service - Network Configuration. 
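The DHCPv4 lease above (10.0.0.36/16 with gateway 10.0.0.1) can be sanity-checked with the standard library: the address sits in 10.0.0.0/16 and the gateway is on-link, so no extra route is needed:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.36/16")  # from the lease in the log
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)             # 10.0.0.0/16
    print(iface.network.netmask)     # 255.255.0.0
    print(gateway in iface.network)  # True: the gateway is directly reachable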
Jan 23 18:57:03.521193 systemd-timesyncd[1478]: Network configuration changed, trying to establish connection. Jan 23 18:57:04.133951 systemd-timesyncd[1478]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 18:57:04.134030 systemd-timesyncd[1478]: Initial clock synchronization to Fri 2026-01-23 18:57:04.133676 UTC. Jan 23 18:57:04.134894 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:57:04.144963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:57:04.151157 systemd-resolved[1476]: Positive Trust Anchors: Jan 23 18:57:04.152027 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:57:04.152206 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:57:04.155758 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:57:04.159581 systemd-resolved[1476]: Defaulting to hostname 'linux'. Jan 23 18:57:04.165923 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:57:04.182084 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:57:04.194607 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:57:04.222195 systemd[1]: Reached target network.target - Network. Jan 23 18:57:04.226786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:57:04.236379 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:57:04.247577 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:57:04.257107 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:57:04.265536 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:57:04.275577 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:57:04.282784 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:57:04.291684 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:57:04.304620 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:57:04.304778 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:57:04.323371 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:57:04.333641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:57:04.344170 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:57:04.353037 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:57:04.361056 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
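systemd-resolved's positive trust anchor above is the DNS root zone's DS record; read left to right per RFC 4034 it carries key tag 20326 (the current root KSK), algorithm 8 (RSASHA256), digest type 2 (SHA-256), then the digest itself. Splitting the record as quoted in the log:

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()
    print(f"owner={owner} key_tag={key_tag} alg={algorithm} "
          f"digest_type={digest_type}")
    print(f"digest={digest}")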
Jan 23 18:57:04.369941 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:57:04.380626 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:57:04.388792 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:57:04.427158 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:57:04.436768 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:57:04.449562 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:57:04.455719 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:57:04.463796 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:57:04.463944 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:57:04.467991 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:57:04.526679 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:57:04.547678 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:57:04.578497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:57:04.595172 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:57:04.617788 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:57:04.630345 jq[1517]: false Jan 23 18:57:04.630516 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:57:04.639099 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:57:04.652600 extend-filesystems[1518]: Found /dev/vda6 Jan 23 18:57:04.664592 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:57:04.679640 extend-filesystems[1518]: Found /dev/vda9 Jan 23 18:57:04.679640 extend-filesystems[1518]: Checking size of /dev/vda9 Jan 23 18:57:04.726406 extend-filesystems[1518]: Resized partition /dev/vda9 Jan 23 18:57:04.750608 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 18:57:04.722642 oslogin_cache_refresh[1519]: Refreshing passwd entry cache Jan 23 18:57:04.751191 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache Jan 23 18:57:04.684691 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:57:04.751762 extend-filesystems[1533]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:57:04.761502 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:57:04.781194 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:57:04.792639 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:57:04.817110 oslogin_cache_refresh[1519]: Failure getting users, quitting Jan 23 18:57:04.819940 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting Jan 23 18:57:04.819940 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
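The extend-filesystems/resize2fs exchange above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks; with the 4k block size resize2fs reports a moment later, that is a jump from about 2.1 GiB to about 7.1 GiB. The arithmetic:

    BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output
    old_blocks, new_blocks = 553_472, 1_864_699
    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after:  7.11 GiB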
Jan 23 18:57:04.819940 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache Jan 23 18:57:04.819684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:57:04.817151 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:57:04.817367 oslogin_cache_refresh[1519]: Refreshing group entry cache Jan 23 18:57:04.823604 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:57:04.831777 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:57:04.834148 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting Jan 23 18:57:04.834148 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:57:04.834124 oslogin_cache_refresh[1519]: Failure getting groups, quitting Jan 23 18:57:04.834149 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:57:04.844904 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:57:04.854945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:57:04.856910 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:57:04.858159 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:57:04.858658 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:57:04.869588 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:57:04.874352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:57:04.883977 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:57:04.884762 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:57:04.893510 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 18:57:04.920484 jq[1542]: true Jan 23 18:57:04.948143 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:57:04.948143 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 18:57:04.948143 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 23 18:57:04.987194 update_engine[1541]: I20260123 18:57:04.964071 1541 main.cc:92] Flatcar Update Engine starting Jan 23 18:57:04.987598 extend-filesystems[1518]: Resized filesystem in /dev/vda9 Jan 23 18:57:04.952909 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:57:04.957539 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:57:04.957930 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:57:04.995116 jq[1556]: true Jan 23 18:57:04.992518 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:57:04.992550 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:57:04.993181 systemd-logind[1539]: New seat seat0. Jan 23 18:57:04.994961 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 18:57:05.020001 tar[1548]: linux-amd64/LICENSE Jan 23 18:57:05.020001 tar[1548]: linux-amd64/helm Jan 23 18:57:05.044909 dbus-daemon[1515]: [system] SELinux support is enabled Jan 23 18:57:05.047348 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:57:05.053054 update_engine[1541]: I20260123 18:57:05.052995 1541 update_check_scheduler.cc:74] Next update check in 9m28s Jan 23 18:57:05.059191 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:57:05.060426 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:57:05.067468 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:57:05.067537 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:57:05.077524 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 18:57:05.078781 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:57:05.092628 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:57:05.146624 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:57:05.176866 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:57:05.182388 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:57:05.186366 locksmithd[1576]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:57:05.191521 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:57:05.230484 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:57:05.247800 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:57:05.287750 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:57:05.289764 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:57:05.313351 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:57:05.351173 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:57:05.375715 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:57:05.379179 containerd[1549]: time="2026-01-23T18:57:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:57:05.381320 containerd[1549]: time="2026-01-23T18:57:05.380655237Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 18:57:05.391343 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 23 18:57:05.401697 containerd[1549]: time="2026-01-23T18:57:05.401637101Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.501µs" Jan 23 18:57:05.402025 containerd[1549]: time="2026-01-23T18:57:05.402000439Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:57:05.402103 containerd[1549]: time="2026-01-23T18:57:05.402087612Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:57:05.402735 containerd[1549]: time="2026-01-23T18:57:05.402710925Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:57:05.403053 containerd[1549]: time="2026-01-23T18:57:05.402796074Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403135628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403233180Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403352172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403718325Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403742721Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403769741Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403781824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:57:05.404346 containerd[1549]: time="2026-01-23T18:57:05.403976458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:57:05.405131 systemd[1]: Reached target getty.target - Login Prompts. 
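The warning at the start of the containerd log above means a version-2 config file is being migrated in memory on every daemon start (11.501µs here). The message itself names the fix; a sketch, assuming /etc/containerd/config.toml is the live config and that the migrated output is reviewed before installation:

    # Print the config migrated to the current schema, review it,
    # then install it so containerd stops re-migrating at startup.
    containerd config migrate > /tmp/config.toml.migrated
    mv /tmp/config.toml.migrated /etc/containerd/config.toml
    systemctl restart containerd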
Jan 23 18:57:05.407424 containerd[1549]: time="2026-01-23T18:57:05.407396230Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:57:05.407527 containerd[1549]: time="2026-01-23T18:57:05.407507428Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:57:05.407584 containerd[1549]: time="2026-01-23T18:57:05.407569734Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:57:05.407917 containerd[1549]: time="2026-01-23T18:57:05.407894050Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:57:05.408471 containerd[1549]: time="2026-01-23T18:57:05.408444697Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:57:05.408712 containerd[1549]: time="2026-01-23T18:57:05.408691428Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:57:05.418955 containerd[1549]: time="2026-01-23T18:57:05.418909941Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:57:05.419477 containerd[1549]: time="2026-01-23T18:57:05.419377481Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:57:05.419477 containerd[1549]: time="2026-01-23T18:57:05.419447171Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:57:05.419477 containerd[1549]: time="2026-01-23T18:57:05.419467739Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419484340Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419497765Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419511581Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419526539Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419540175Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419554581Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:57:05.419572 containerd[1549]: time="2026-01-23T18:57:05.419565832Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:57:05.419744 containerd[1549]: time="2026-01-23T18:57:05.419580901Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:57:05.419773 containerd[1549]: time="2026-01-23T18:57:05.419753623Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 18:57:05.419800 containerd[1549]: time="2026-01-23T18:57:05.419776836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:57:05.419800 containerd[1549]: time="2026-01-23T18:57:05.419795240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:57:05.419928 containerd[1549]: time="2026-01-23T18:57:05.419809467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:57:05.419928 containerd[1549]: time="2026-01-23T18:57:05.419892883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:57:05.419928 containerd[1549]: time="2026-01-23T18:57:05.419906678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:57:05.419928 containerd[1549]: time="2026-01-23T18:57:05.419920775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:57:05.420043 containerd[1549]: time="2026-01-23T18:57:05.419934340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:57:05.420043 containerd[1549]: time="2026-01-23T18:57:05.419949688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:57:05.420043 containerd[1549]: time="2026-01-23T18:57:05.419966019Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:57:05.420043 containerd[1549]: time="2026-01-23T18:57:05.419979765Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:57:05.420043 containerd[1549]: time="2026-01-23T18:57:05.420034427Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:57:05.420406 containerd[1549]: time="2026-01-23T18:57:05.420054334Z" level=info msg="Start snapshots syncer" Jan 23 18:57:05.420406 containerd[1549]: time="2026-01-23T18:57:05.420209163Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 18:57:05.420870 containerd[1549]: time="2026-01-23T18:57:05.420687977Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 18:57:05.420870 containerd[1549]: time="2026-01-23T18:57:05.420798824Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.420948694Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421142595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421179605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421193260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421213438Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421227433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421324074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:57:05.421344 containerd[1549]: time="2026-01-23T18:57:05.421340305Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:57:05.421563 containerd[1549]: time="2026-01-23T18:57:05.421364861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 18:57:05.421591 containerd[1549]: time="2026-01-23T18:57:05.421565555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 18:57:05.421619 containerd[1549]: time="2026-01-23T18:57:05.421594078Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421693394Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421719191Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421732576Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421750209Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421761811Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421776098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:57:05.421980 containerd[1549]: time="2026-01-23T18:57:05.421967275Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:57:05.422336 containerd[1549]: time="2026-01-23T18:57:05.421996529Z" level=info msg="runtime interface created" Jan 23 18:57:05.422336 containerd[1549]: time="2026-01-23T18:57:05.422006728Z" level=info msg="created NRI interface" Jan 23 18:57:05.422336 containerd[1549]: time="2026-01-23T18:57:05.422018069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:57:05.422336 containerd[1549]: time="2026-01-23T18:57:05.422035492Z" level=info msg="Connect containerd service" Jan 23 18:57:05.422336 containerd[1549]: time="2026-01-23T18:57:05.422068574Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:57:05.423991 containerd[1549]: time="2026-01-23T18:57:05.423880185Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:57:05.613090 containerd[1549]: time="2026-01-23T18:57:05.612770595Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 18:57:05.613090 containerd[1549]: time="2026-01-23T18:57:05.612936435Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:57:05.613090 containerd[1549]: time="2026-01-23T18:57:05.612968975Z" level=info msg="Start subscribing containerd event" Jan 23 18:57:05.613090 containerd[1549]: time="2026-01-23T18:57:05.613001906Z" level=info msg="Start recovering state" Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613165833Z" level=info msg="Start event monitor" Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613185710Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613196770Z" level=info msg="Start streaming server" Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613207741Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613217429Z" level=info msg="runtime interface starting up..." Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613225714Z" level=info msg="starting plugins..." Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613338214Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:57:05.615632 containerd[1549]: time="2026-01-23T18:57:05.613551312Z" level=info msg="containerd successfully booted in 0.234989s" Jan 23 18:57:05.614094 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:57:05.700366 tar[1548]: linux-amd64/README.md Jan 23 18:57:05.752007 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:57:05.907683 systemd-networkd[1468]: eth0: Gained IPv6LL Jan 23 18:57:05.926683 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:57:05.939410 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:57:05.947885 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 18:57:05.959439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:05.974205 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:57:06.041874 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:57:06.053128 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 18:57:06.053796 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 18:57:06.065714 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:57:08.453602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:08.473020 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:57:08.492387 systemd[1]: Startup finished in 6.228s (kernel) + 23.405s (initrd) + 13.729s (userspace) = 43.363s.
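The only error in the containerd startup above is expected on a freshly provisioned node: the CRI plugin (confDir /etc/cni/net.d, binDir /opt/cni/bin, per the dumped config) found no CNI network config, and pods cannot get networking until a CNI add-on installs one. Purely as an illustrative sketch of the expected file shape, a minimal host-local bridge conflist might look like the following; the file name, network name, and 10.85.0.0/16 subnet are placeholders, not values from this system, and real clusters usually get this file from flannel, Calico, or similar:

    cat <<'EOF' > /etc/cni/net.d/10-example-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.85.0.0/16" }]]
          }
        }
      ]
    }
    EOF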
Jan 23 18:57:08.497089 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:11.152230 kubelet[1648]: E0123 18:57:11.150981 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:11.166440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:11.166784 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:11.167737 systemd[1]: kubelet.service: Consumed 2.040s CPU time, 269.5M memory peak. Jan 23 18:57:14.365228 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:57:14.400775 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:41914.service - OpenSSH per-connection server daemon (10.0.0.1:41914). Jan 23 18:57:15.990882 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1286703055 wd_nsec: 1286702459 Jan 23 18:57:16.480359 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 41914 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:16.486815 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:16.537747 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:57:16.541760 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:57:16.566031 systemd-logind[1539]: New session 1 of user core. Jan 23 18:57:16.595615 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:57:16.613072 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:57:16.656449 (systemd)[1667]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:57:16.686106 systemd-logind[1539]: New session c1 of user core. Jan 23 18:57:17.042440 systemd[1667]: Queued start job for default target default.target. Jan 23 18:57:17.076192 systemd[1667]: Created slice app.slice - User Application Slice. Jan 23 18:57:17.076348 systemd[1667]: Reached target paths.target - Paths. Jan 23 18:57:17.076425 systemd[1667]: Reached target timers.target - Timers. Jan 23 18:57:17.084735 systemd[1667]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:57:17.178088 systemd[1667]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:57:17.180509 systemd[1667]: Reached target sockets.target - Sockets. Jan 23 18:57:17.180793 systemd[1667]: Reached target basic.target - Basic System. Jan 23 18:57:17.180870 systemd[1667]: Reached target default.target - Main User Target. Jan 23 18:57:17.180998 systemd[1667]: Startup finished in 444ms. Jan 23 18:57:17.184405 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:57:17.198776 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:57:17.299517 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:40646.service - OpenSSH per-connection server daemon (10.0.0.1:40646). 
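The kubelet exit above is the expected failure mode on a node that has not yet joined a cluster: the unit starts kubelet with --config pointing at /var/lib/kubelet/config.yaml, and that file is normally written by kubeadm init or kubeadm join rather than by hand. As a sketch of the file's minimal shape only (kubeadm generates a much fuller version; cgroupDriver matches the systemd driver reported later in this log):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF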
Jan 23 18:57:17.430007 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 40646 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:17.432681 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:17.449373 systemd-logind[1539]: New session 2 of user core. Jan 23 18:57:17.468875 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:57:17.570627 sshd[1681]: Connection closed by 10.0.0.1 port 40646 Jan 23 18:57:17.571913 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:17.596747 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:40646.service: Deactivated successfully. Jan 23 18:57:17.602079 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 18:57:17.604798 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Jan 23 18:57:17.612584 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656). Jan 23 18:57:17.620009 systemd-logind[1539]: Removed session 2. Jan 23 18:57:17.738224 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:17.743504 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:17.785817 systemd-logind[1539]: New session 3 of user core. Jan 23 18:57:17.800637 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:57:17.898174 sshd[1690]: Connection closed by 10.0.0.1 port 40656 Jan 23 18:57:17.899633 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:17.917423 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:40656.service: Deactivated successfully. Jan 23 18:57:17.920811 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:57:17.923450 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:57:17.928476 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:40666.service - OpenSSH per-connection server daemon (10.0.0.1:40666). Jan 23 18:57:17.930502 systemd-logind[1539]: Removed session 3. Jan 23 18:57:18.048110 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:18.053010 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:18.073211 systemd-logind[1539]: New session 4 of user core. Jan 23 18:57:18.082716 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:57:18.162360 sshd[1699]: Connection closed by 10.0.0.1 port 40666 Jan 23 18:57:18.167591 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:18.179014 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:40666.service: Deactivated successfully. Jan 23 18:57:18.182891 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:57:18.186733 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:57:18.191783 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:40682.service - OpenSSH per-connection server daemon (10.0.0.1:40682). Jan 23 18:57:18.194520 systemd-logind[1539]: Removed session 4. 
Jan 23 18:57:18.278080 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 40682 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:18.280559 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:18.289770 systemd-logind[1539]: New session 5 of user core. Jan 23 18:57:18.299702 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:57:18.433716 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:57:18.434449 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:18.468509 sudo[1709]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:18.472135 sshd[1708]: Connection closed by 10.0.0.1 port 40682 Jan 23 18:57:18.473533 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:18.498374 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:40682.service: Deactivated successfully. Jan 23 18:57:18.502750 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:57:18.506569 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:57:18.510365 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:40694.service - OpenSSH per-connection server daemon (10.0.0.1:40694). Jan 23 18:57:18.514495 systemd-logind[1539]: Removed session 5. Jan 23 18:57:18.617384 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 40694 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:18.620088 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:18.633916 systemd-logind[1539]: New session 6 of user core. Jan 23 18:57:18.645745 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:57:18.737821 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:57:18.738549 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:18.762494 sudo[1720]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:18.775585 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:57:18.776206 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:18.811758 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:57:18.936787 augenrules[1742]: No rules Jan 23 18:57:18.941523 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:57:18.943479 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:57:18.946548 sudo[1719]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:18.952524 sshd[1718]: Connection closed by 10.0.0.1 port 40694 Jan 23 18:57:18.953515 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:18.975455 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:40694.service: Deactivated successfully. Jan 23 18:57:18.980143 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:57:18.983340 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:57:18.989441 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:40704.service - OpenSSH per-connection server daemon (10.0.0.1:40704). Jan 23 18:57:18.996580 systemd-logind[1539]: Removed session 6. 
Jan 23 18:57:19.091622 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 40704 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 18:57:19.092630 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:19.126194 systemd-logind[1539]: New session 7 of user core. Jan 23 18:57:19.142584 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:57:19.239070 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:57:19.240726 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:21.377152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:57:21.384210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:22.486864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:22.759194 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:23.541729 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 18:57:23.553451 kubelet[1784]: E0123 18:57:23.552693 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:23.563481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:23.563814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:23.571852 systemd[1]: kubelet.service: Consumed 1.371s CPU time, 109M memory peak. Jan 23 18:57:23.572156 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:57:27.928565 dockerd[1795]: time="2026-01-23T18:57:27.927183699Z" level=info msg="Starting up" Jan 23 18:57:27.937173 dockerd[1795]: time="2026-01-23T18:57:27.936919502Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:57:28.035971 dockerd[1795]: time="2026-01-23T18:57:28.035441054Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:57:28.175845 systemd[1]: var-lib-docker-metacopy\x2dcheck1227080976-merged.mount: Deactivated successfully. Jan 23 18:57:28.233681 dockerd[1795]: time="2026-01-23T18:57:28.232143731Z" level=info msg="Loading containers: start." Jan 23 18:57:28.317660 kernel: Initializing XFRM netlink socket Jan 23 18:57:32.388199 systemd-networkd[1468]: docker0: Link UP Jan 23 18:57:32.418433 dockerd[1795]: time="2026-01-23T18:57:32.415891285Z" level=info msg="Loading containers: done." Jan 23 18:57:32.563070 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck875934123-merged.mount: Deactivated successfully. 
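The "Scheduled restart job, restart counter is at N" entries that recur from here on come from systemd, not kubelet: the unit's restart policy relaunches the failed service roughly every ten seconds until the missing config appears. A sketch of the stanza that would produce this cadence, assuming a stock kubeadm-style unit (verify against the real unit with the first command):

    # Show the unit actually in effect, including drop-ins.
    systemctl cat kubelet
    # Stock kubeadm packaging ships, in the [Service] section:
    #   Restart=always
    #   RestartSec=10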
Jan 23 18:57:32.591768 dockerd[1795]: time="2026-01-23T18:57:32.589943062Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:57:32.591768 dockerd[1795]: time="2026-01-23T18:57:32.592072688Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:57:32.593646 dockerd[1795]: time="2026-01-23T18:57:32.592745904Z" level=info msg="Initializing buildkit" Jan 23 18:57:32.934154 dockerd[1795]: time="2026-01-23T18:57:32.933625060Z" level=info msg="Completed buildkit initialization" Jan 23 18:57:32.981380 dockerd[1795]: time="2026-01-23T18:57:32.980438630Z" level=info msg="Daemon has completed initialization" Jan 23 18:57:32.981749 dockerd[1795]: time="2026-01-23T18:57:32.981009435Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:57:32.984568 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:57:33.628743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:57:33.650473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:34.978070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:35.290036 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:36.233106 kubelet[2019]: E0123 18:57:36.231217 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:36.247463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:36.247837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:36.249933 systemd[1]: kubelet.service: Consumed 1.965s CPU time, 110.7M memory peak. Jan 23 18:57:37.172186 containerd[1549]: time="2026-01-23T18:57:37.171353929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 18:57:37.950761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268861085.mount: Deactivated successfully. Jan 23 18:57:46.386907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:57:46.404649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:47.986836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:48.016749 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:49.727418 kubelet[2097]: E0123 18:57:49.726414 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:49.734718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:49.735450 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 18:57:49.736727 systemd[1]: kubelet.service: Consumed 2.895s CPU time, 111.2M memory peak. Jan 23 18:57:50.205965 update_engine[1541]: I20260123 18:57:50.204934 1541 update_attempter.cc:509] Updating boot flags... Jan 23 18:57:52.190191 containerd[1549]: time="2026-01-23T18:57:52.190140443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:52.195086 containerd[1549]: time="2026-01-23T18:57:52.195048033Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 23 18:57:52.214452 containerd[1549]: time="2026-01-23T18:57:52.214146262Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:52.233593 containerd[1549]: time="2026-01-23T18:57:52.233467529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:52.242413 containerd[1549]: time="2026-01-23T18:57:52.241440508Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 15.069872869s" Jan 23 18:57:52.242413 containerd[1549]: time="2026-01-23T18:57:52.241649547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 18:57:52.385069 containerd[1549]: time="2026-01-23T18:57:52.384657446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 18:57:59.874012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 18:57:59.879469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:00.422905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:00.514871 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:01.352702 kubelet[2136]: E0123 18:58:01.352053 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:01.363064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:01.363670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:01.364828 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 109.9M memory peak. 
Jan 23 18:58:01.993803 containerd[1549]: time="2026-01-23T18:58:01.992972237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:01.998745 containerd[1549]: time="2026-01-23T18:58:01.996757412Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 23 18:58:02.017466 containerd[1549]: time="2026-01-23T18:58:02.016755806Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:02.026867 containerd[1549]: time="2026-01-23T18:58:02.026214055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:02.028364 containerd[1549]: time="2026-01-23T18:58:02.028084611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 9.638287082s" Jan 23 18:58:02.028364 containerd[1549]: time="2026-01-23T18:58:02.028138530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 18:58:02.032440 containerd[1549]: time="2026-01-23T18:58:02.031816098Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 18:58:04.649806 containerd[1549]: time="2026-01-23T18:58:04.647151243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.651065 containerd[1549]: time="2026-01-23T18:58:04.650215197Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 23 18:58:04.653864 containerd[1549]: time="2026-01-23T18:58:04.653629406Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.661412 containerd[1549]: time="2026-01-23T18:58:04.660821442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.662064 containerd[1549]: time="2026-01-23T18:58:04.661944529Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.630023238s" Jan 23 18:58:04.662233 containerd[1549]: time="2026-01-23T18:58:04.662200830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 23 18:58:04.664046 containerd[1549]: time="2026-01-23T18:58:04.664025431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:58:11.338717 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4636710907 wd_nsec: 4636708474 Jan 23 18:58:11.547841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 18:58:11.681380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:12.333653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:12.368233 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:12.580758 kubelet[2162]: E0123 18:58:12.579625 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:12.593748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:12.594098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:12.606167 systemd[1]: kubelet.service: Consumed 650ms CPU time, 111.3M memory peak. Jan 23 18:58:12.810335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544340523.mount: Deactivated successfully. Jan 23 18:58:14.880037 containerd[1549]: time="2026-01-23T18:58:14.879773676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:14.883692 containerd[1549]: time="2026-01-23T18:58:14.883588190Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 23 18:58:14.894084 containerd[1549]: time="2026-01-23T18:58:14.893176132Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:14.907875 containerd[1549]: time="2026-01-23T18:58:14.907579066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:14.909388 containerd[1549]: time="2026-01-23T18:58:14.909032115Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 10.244878808s" Jan 23 18:58:14.909388 containerd[1549]: time="2026-01-23T18:58:14.909131469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:58:14.920917 containerd[1549]: time="2026-01-23T18:58:14.920715722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 18:58:15.969513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603383650.mount: Deactivated successfully.
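The PullImage/ImageCreate pairs in this stretch are containerd's CRI plugin prefetching the Kubernetes control-plane images. The same pulls can be inspected or reproduced by hand through containerd's k8s.io namespace, the one registered with NRI earlier in this log; a sketch using an image name taken from the entries above:

    # List images the CRI plugin has pulled, then pull one manually
    # into the same namespace kubelet uses.
    ctr --namespace k8s.io images ls
    ctr --namespace k8s.io images pull registry.k8s.io/kube-proxy:v1.33.7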
Jan 23 18:58:19.732938 containerd[1549]: time="2026-01-23T18:58:19.730881940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:19.734234 containerd[1549]: time="2026-01-23T18:58:19.733215299Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 23 18:58:19.739882 containerd[1549]: time="2026-01-23T18:58:19.739716456Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:19.746654 containerd[1549]: time="2026-01-23T18:58:19.745637071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:19.746821 containerd[1549]: time="2026-01-23T18:58:19.746786524Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.826021851s" Jan 23 18:58:19.747607 containerd[1549]: time="2026-01-23T18:58:19.746902048Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 18:58:19.750112 containerd[1549]: time="2026-01-23T18:58:19.749917448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:58:20.459965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563240219.mount: Deactivated successfully. 
Jan 23 18:58:20.476197 containerd[1549]: time="2026-01-23T18:58:20.476002175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:20.479730 containerd[1549]: time="2026-01-23T18:58:20.479452702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 18:58:20.483445 containerd[1549]: time="2026-01-23T18:58:20.483339477Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:20.488172 containerd[1549]: time="2026-01-23T18:58:20.487945050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:20.489017 containerd[1549]: time="2026-01-23T18:58:20.488658558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 738.650611ms" Jan 23 18:58:20.489017 containerd[1549]: time="2026-01-23T18:58:20.488745920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:58:20.491308 containerd[1549]: time="2026-01-23T18:58:20.491169155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 18:58:21.292889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632427575.mount: Deactivated successfully. Jan 23 18:58:23.132884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 18:58:23.171709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:24.393775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:24.440761 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:24.891144 kubelet[2252]: E0123 18:58:24.890665 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:24.898375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:24.898777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:24.899677 systemd[1]: kubelet.service: Consumed 1.143s CPU time, 110.7M memory peak. Jan 23 18:58:35.127388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 23 18:58:35.215720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:36.125954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:58:36.247073 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:37.237142 kubelet[2311]: E0123 18:58:37.235459 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:37.253364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:37.253801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:37.258512 systemd[1]: kubelet.service: Consumed 1.432s CPU time, 110.5M memory peak. Jan 23 18:58:38.127176 containerd[1549]: time="2026-01-23T18:58:38.126900060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:38.134787 containerd[1549]: time="2026-01-23T18:58:38.133772569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 23 18:58:38.138042 containerd[1549]: time="2026-01-23T18:58:38.137875757Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:38.145373 containerd[1549]: time="2026-01-23T18:58:38.145149554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:38.151913 containerd[1549]: time="2026-01-23T18:58:38.148017091Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 17.656623279s" Jan 23 18:58:38.151913 containerd[1549]: time="2026-01-23T18:58:38.148472780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 18:58:47.376549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 23 18:58:47.382373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:47.766608 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:58:47.766875 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:58:47.768638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:47.770979 systemd[1]: kubelet.service: Consumed 250ms CPU time, 74.3M memory peak. Jan 23 18:58:47.778979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:47.877553 systemd[1]: Reload requested from client PID 2355 ('systemctl') (unit session-7.scope)... Jan 23 18:58:47.877633 systemd[1]: Reloading... Jan 23 18:58:48.080424 zram_generator::config[2398]: No configuration found. Jan 23 18:58:49.144018 systemd[1]: Reloading finished in 1265 ms. 
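The reload above is attributed to PID 2355 ('systemctl') in session-7.scope, i.e. the interactive session in which install.sh was run under sudo, so the script presumably rewrote kubelet's unit files or drop-ins and then asked systemd to rescan them. The equivalent manual sequence, as a sketch only (the script's exact commands are not in the log):

    sudo systemctl daemon-reload     # produces the "Reloading..." entries above
    sudo systemctl restart kubelet   # matches the stop/start that follows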
Jan 23 18:58:49.332233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:58:49.332641 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:58:49.333675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:49.333963 systemd[1]: kubelet.service: Consumed 271ms CPU time, 98.2M memory peak. Jan 23 18:58:49.341776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:50.068599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:50.148607 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:58:50.387921 kubelet[2446]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:58:50.387921 kubelet[2446]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:58:50.387921 kubelet[2446]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:58:50.387921 kubelet[2446]: I0123 18:58:50.387138 2446 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:58:51.645112 kubelet[2446]: I0123 18:58:51.642481 2446 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:58:51.645112 kubelet[2446]: I0123 18:58:51.642581 2446 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:58:51.645112 kubelet[2446]: I0123 18:58:51.642938 2446 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:58:51.747984 kubelet[2446]: I0123 18:58:51.745675 2446 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:58:51.759513 kubelet[2446]: E0123 18:58:51.759148 2446 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:58:51.812109 kubelet[2446]: I0123 18:58:51.811970 2446 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:58:51.841110 kubelet[2446]: I0123 18:58:51.840067 2446 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 18:58:51.841110 kubelet[2446]: I0123 18:58:51.840719 2446 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:58:51.841110 kubelet[2446]: I0123 18:58:51.840821 2446 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:58:51.841110 kubelet[2446]: I0123 18:58:51.841099 2446 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:58:51.842814 kubelet[2446]: I0123 18:58:51.841111 2446 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:58:51.842814 kubelet[2446]: I0123 18:58:51.841551 2446 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:58:51.848660 kubelet[2446]: I0123 18:58:51.847481 2446 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:58:51.848660 kubelet[2446]: I0123 18:58:51.847541 2446 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:58:51.848660 kubelet[2446]: I0123 18:58:51.847588 2446 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:58:51.848660 kubelet[2446]: I0123 18:58:51.847674 2446 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:58:51.871162 kubelet[2446]: E0123 18:58:51.870867 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:58:51.873991 kubelet[2446]: E0123 18:58:51.872632 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 18:58:51.875111 kubelet[2446]: I0123 18:58:51.874214 2446 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:58:51.877057 kubelet[2446]: I0123 18:58:51.876520 2446 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:58:51.881994 kubelet[2446]: W0123 18:58:51.880924 2446 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:58:51.919703 kubelet[2446]: I0123 18:58:51.917114 2446 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:58:51.922585 kubelet[2446]: I0123 18:58:51.920886 2446 server.go:1289] "Started kubelet" Jan 23 18:58:51.923933 kubelet[2446]: I0123 18:58:51.923085 2446 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:58:51.928886 kubelet[2446]: I0123 18:58:51.923083 2446 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:58:51.930103 kubelet[2446]: I0123 18:58:51.929870 2446 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:58:51.931920 kubelet[2446]: I0123 18:58:51.931810 2446 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:58:51.936493 kubelet[2446]: I0123 18:58:51.936399 2446 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:58:51.940820 kubelet[2446]: I0123 18:58:51.940703 2446 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:58:51.949680 kubelet[2446]: E0123 18:58:51.943636 2446 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d713c2e767f1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:58:51.91796713 +0000 UTC m=+1.756277890,LastTimestamp:2026-01-23 18:58:51.91796713 +0000 UTC m=+1.756277890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:58:51.956408 kubelet[2446]: I0123 18:58:51.955003 2446 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:58:51.956408 kubelet[2446]: I0123 18:58:51.955229 2446 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:58:51.958872 kubelet[2446]: E0123 18:58:51.958221 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:58:51.959898 kubelet[2446]: E0123 18:58:51.959416 2446 kubelet.go:1600] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:58:51.970172 kubelet[2446]: I0123 18:58:51.969477 2446 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:58:51.972402 kubelet[2446]: I0123 18:58:51.972359 2446 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:58:51.972647 kubelet[2446]: I0123 18:58:51.972486 2446 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:58:51.973633 kubelet[2446]: E0123 18:58:51.973448 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Jan 23 18:58:51.990203 kubelet[2446]: E0123 18:58:51.989191 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:58:51.994406 kubelet[2446]: I0123 18:58:51.994182 2446 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:58:52.058864 kubelet[2446]: E0123 18:58:52.058827 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:58:52.121177 kubelet[2446]: I0123 18:58:52.121061 2446 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:58:52.121177 kubelet[2446]: I0123 18:58:52.121094 2446 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:58:52.121177 kubelet[2446]: I0123 18:58:52.121125 2446 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:58:52.150867 kubelet[2446]: I0123 18:58:52.150625 2446 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:58:52.159631 kubelet[2446]: E0123 18:58:52.159481 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:58:52.160504 kubelet[2446]: I0123 18:58:52.159541 2446 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:58:52.160504 kubelet[2446]: I0123 18:58:52.160083 2446 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:58:52.160504 kubelet[2446]: I0123 18:58:52.160119 2446 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
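The three deprecation warnings printed at kubelet start all carry the same advice: move the value into the config file. A hedged sketch of the config-file equivalents (the socket path is an assumption; the plugin directory is the one the Flexvolume probe above reports; --pod-infra-container-image has no config-file counterpart because, as the warning says, the sandbox image moves to the CRI runtime's own configuration):

    # KubeletConfiguration fields replacing the deprecated flags -- sketch
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed socket path
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

The "connection refused" errors against https://10.0.0.36:6443 in the same window are expected at this stage: the kubelet itself is about to launch kube-apiserver as a static pod, so every API call fails until that container is up.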
Jan 23 18:58:52.160504 kubelet[2446]: I0123 18:58:52.160133 2446 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:58:52.160504 kubelet[2446]: E0123 18:58:52.160198 2446 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:58:52.163431 kubelet[2446]: E0123 18:58:52.162517 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:58:52.176631 kubelet[2446]: E0123 18:58:52.176199 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Jan 23 18:58:52.214163 kubelet[2446]: I0123 18:58:52.213882 2446 policy_none.go:49] "None policy: Start" Jan 23 18:58:52.214163 kubelet[2446]: I0123 18:58:52.213969 2446 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:58:52.214163 kubelet[2446]: I0123 18:58:52.213999 2446 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:58:52.250955 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:58:52.261479 kubelet[2446]: E0123 18:58:52.261178 2446 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:58:52.261479 kubelet[2446]: E0123 18:58:52.261457 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:58:52.290706 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:58:52.299427 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:58:52.326042 kubelet[2446]: E0123 18:58:52.324732 2446 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:58:52.326042 kubelet[2446]: I0123 18:58:52.325039 2446 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:58:52.326042 kubelet[2446]: I0123 18:58:52.325054 2446 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:58:52.326042 kubelet[2446]: I0123 18:58:52.325929 2446 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:58:52.331977 kubelet[2446]: E0123 18:58:52.331547 2446 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:58:52.331977 kubelet[2446]: E0123 18:58:52.331799 2446 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:58:52.432462 kubelet[2446]: I0123 18:58:52.430579 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:58:52.432462 kubelet[2446]: E0123 18:58:52.431058 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 23 18:58:52.477206 kubelet[2446]: I0123 18:58:52.477087 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15cefdf50543f0acb1141bef276aa675-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15cefdf50543f0acb1141bef276aa675\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:58:52.518502 systemd[1]: Created slice kubepods-burstable-pod15cefdf50543f0acb1141bef276aa675.slice - libcontainer container kubepods-burstable-pod15cefdf50543f0acb1141bef276aa675.slice. Jan 23 18:58:52.545806 kubelet[2446]: E0123 18:58:52.545010 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:52.552917 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 18:58:52.561944 kubelet[2446]: E0123 18:58:52.561625 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:52.567914 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 23 18:58:52.575126 kubelet[2446]: E0123 18:58:52.574983 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:52.578219 kubelet[2446]: I0123 18:58:52.577465 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15cefdf50543f0acb1141bef276aa675-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15cefdf50543f0acb1141bef276aa675\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:58:52.578219 kubelet[2446]: E0123 18:58:52.577692 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Jan 23 18:58:52.578219 kubelet[2446]: I0123 18:58:52.577902 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:58:52.578219 kubelet[2446]: I0123 18:58:52.577932 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:58:52.578219 kubelet[2446]: I0123 18:58:52.577949 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15cefdf50543f0acb1141bef276aa675-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15cefdf50543f0acb1141bef276aa675\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:58:52.578978 kubelet[2446]: I0123 18:58:52.577964 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:58:52.578978 kubelet[2446]: I0123 18:58:52.577977 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:58:52.578978 kubelet[2446]: I0123 18:58:52.577989 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:58:52.578978 kubelet[2446]: I0123 18:58:52.578002 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:58:52.636593 kubelet[2446]: I0123 18:58:52.633954 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:58:52.636593 kubelet[2446]: E0123 18:58:52.635093 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 23 18:58:52.812860 kubelet[2446]: E0123 18:58:52.808558 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:58:52.849958 kubelet[2446]: E0123 18:58:52.846568 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:52.850130 containerd[1549]: time="2026-01-23T18:58:52.847580571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15cefdf50543f0acb1141bef276aa675,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:52.864737 kubelet[2446]: E0123 18:58:52.864162 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:52.870468 containerd[1549]: time="2026-01-23T18:58:52.865507485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:52.880717 kubelet[2446]: E0123 18:58:52.876580 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:52.881146 containerd[1549]: time="2026-01-23T18:58:52.877219727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:52.993374 containerd[1549]: time="2026-01-23T18:58:52.993095164Z" level=info msg="connecting to shim 50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1" address="unix:///run/containerd/s/2bfd39e64a183a01c0ae924eaaafeaf96e15a94d1eba2d53da10eff496f7a8d4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:53.019421 containerd[1549]: time="2026-01-23T18:58:53.019233021Z" level=info msg="connecting to shim 7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046" address="unix:///run/containerd/s/d3d066faaa7c15802e719cf64b9a8f8ec9364c7419f0dc5a898865854c29de6f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:53.030030 containerd[1549]: time="2026-01-23T18:58:53.029485375Z" level=info msg="connecting to shim 41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb" address="unix:///run/containerd/s/9cc9587d2fae7f33c94d38710ec8c268a713b8283821e4da224a45318fae1d1c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:53.042202 kubelet[2446]: I0123 18:58:53.041965 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:58:53.044812 kubelet[2446]: 
E0123 18:58:53.044495 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 23 18:58:53.060535 kubelet[2446]: E0123 18:58:53.060398 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:58:53.112596 systemd[1]: Started cri-containerd-50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1.scope - libcontainer container 50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1. Jan 23 18:58:53.131154 kubelet[2446]: E0123 18:58:53.131022 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:58:53.162938 systemd[1]: Started cri-containerd-7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046.scope - libcontainer container 7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046. Jan 23 18:58:53.181618 systemd[1]: Started cri-containerd-41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb.scope - libcontainer container 41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb. Jan 23 18:58:53.310842 kubelet[2446]: E0123 18:58:53.310720 2446 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:58:53.360158 containerd[1549]: time="2026-01-23T18:58:53.359203574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15cefdf50543f0acb1141bef276aa675,Namespace:kube-system,Attempt:0,} returns sandbox id \"50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1\"" Jan 23 18:58:53.361469 kubelet[2446]: E0123 18:58:53.361087 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:53.369102 containerd[1549]: time="2026-01-23T18:58:53.368442880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb\"" Jan 23 18:58:53.377502 kubelet[2446]: E0123 18:58:53.377210 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:53.380017 kubelet[2446]: E0123 18:58:53.378638 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Jan 23 18:58:53.383373 containerd[1549]: 
time="2026-01-23T18:58:53.383037900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046\"" Jan 23 18:58:53.384144 kubelet[2446]: E0123 18:58:53.384013 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:53.387145 containerd[1549]: time="2026-01-23T18:58:53.387060190Z" level=info msg="CreateContainer within sandbox \"50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:58:53.396720 containerd[1549]: time="2026-01-23T18:58:53.396555986Z" level=info msg="CreateContainer within sandbox \"41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:58:53.400023 containerd[1549]: time="2026-01-23T18:58:53.399205742Z" level=info msg="CreateContainer within sandbox \"7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:58:53.412618 containerd[1549]: time="2026-01-23T18:58:53.412524141Z" level=info msg="Container 35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:53.432162 containerd[1549]: time="2026-01-23T18:58:53.431955677Z" level=info msg="CreateContainer within sandbox \"50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f\"" Jan 23 18:58:53.433676 containerd[1549]: time="2026-01-23T18:58:53.433120179Z" level=info msg="StartContainer for \"35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f\"" Jan 23 18:58:53.435970 containerd[1549]: time="2026-01-23T18:58:53.435620985Z" level=info msg="connecting to shim 35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f" address="unix:///run/containerd/s/2bfd39e64a183a01c0ae924eaaafeaf96e15a94d1eba2d53da10eff496f7a8d4" protocol=ttrpc version=3 Jan 23 18:58:53.448058 containerd[1549]: time="2026-01-23T18:58:53.445890121Z" level=info msg="Container c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:53.460186 containerd[1549]: time="2026-01-23T18:58:53.459919501Z" level=info msg="Container afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:53.486449 containerd[1549]: time="2026-01-23T18:58:53.485747320Z" level=info msg="CreateContainer within sandbox \"7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9\"" Jan 23 18:58:53.487875 containerd[1549]: time="2026-01-23T18:58:53.487506527Z" level=info msg="StartContainer for \"c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9\"" Jan 23 18:58:53.493514 containerd[1549]: time="2026-01-23T18:58:53.493412205Z" level=info msg="connecting to shim c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9" 
address="unix:///run/containerd/s/d3d066faaa7c15802e719cf64b9a8f8ec9364c7419f0dc5a898865854c29de6f" protocol=ttrpc version=3 Jan 23 18:58:53.498605 systemd[1]: Started cri-containerd-35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f.scope - libcontainer container 35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f. Jan 23 18:58:53.506937 containerd[1549]: time="2026-01-23T18:58:53.506220472Z" level=info msg="CreateContainer within sandbox \"41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40\"" Jan 23 18:58:53.509956 containerd[1549]: time="2026-01-23T18:58:53.509838603Z" level=info msg="StartContainer for \"afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40\"" Jan 23 18:58:53.512704 containerd[1549]: time="2026-01-23T18:58:53.512405444Z" level=info msg="connecting to shim afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40" address="unix:///run/containerd/s/9cc9587d2fae7f33c94d38710ec8c268a713b8283821e4da224a45318fae1d1c" protocol=ttrpc version=3 Jan 23 18:58:53.571185 systemd[1]: Started cri-containerd-c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9.scope - libcontainer container c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9. Jan 23 18:58:53.599187 systemd[1]: Started cri-containerd-afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40.scope - libcontainer container afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40. Jan 23 18:58:53.722029 kubelet[2446]: E0123 18:58:53.720680 2446 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d713c2e767f1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:58:51.91796713 +0000 UTC m=+1.756277890,LastTimestamp:2026-01-23 18:58:51.91796713 +0000 UTC m=+1.756277890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:58:53.730181 containerd[1549]: time="2026-01-23T18:58:53.730049344Z" level=info msg="StartContainer for \"35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f\" returns successfully" Jan 23 18:58:53.794973 containerd[1549]: time="2026-01-23T18:58:53.793943036Z" level=info msg="StartContainer for \"afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40\" returns successfully" Jan 23 18:58:53.795954 kubelet[2446]: E0123 18:58:53.795352 2446 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:58:53.805170 containerd[1549]: time="2026-01-23T18:58:53.804451693Z" level=info msg="StartContainer for \"c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9\" returns successfully" Jan 23 
18:58:53.853928 kubelet[2446]: I0123 18:58:53.853614 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:58:53.854659 kubelet[2446]: E0123 18:58:53.854123 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 23 18:58:54.198430 kubelet[2446]: E0123 18:58:54.196564 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:54.198430 kubelet[2446]: E0123 18:58:54.196761 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:54.221371 kubelet[2446]: E0123 18:58:54.220233 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:54.221371 kubelet[2446]: E0123 18:58:54.220552 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:54.234868 kubelet[2446]: E0123 18:58:54.234736 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:54.235129 kubelet[2446]: E0123 18:58:54.235031 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:55.233139 kubelet[2446]: E0123 18:58:55.232916 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:55.240399 kubelet[2446]: E0123 18:58:55.235621 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:55.240399 kubelet[2446]: E0123 18:58:55.235745 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:55.240399 kubelet[2446]: E0123 18:58:55.236631 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:55.240399 kubelet[2446]: E0123 18:58:55.238634 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:55.240399 kubelet[2446]: E0123 18:58:55.238845 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:55.479547 kubelet[2446]: I0123 18:58:55.478212 2446 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:58:56.257459 kubelet[2446]: E0123 18:58:56.256951 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:58:56.257459 kubelet[2446]: E0123 18:58:56.257091 2446 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:02.268685 kubelet[2446]: E0123 18:59:02.265566 2446 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:59:02.490415 kubelet[2446]: E0123 18:59:02.294975 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:02.490415 kubelet[2446]: E0123 18:59:02.386491 2446 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:59:04.581822 kubelet[2446]: E0123 18:59:04.581501 2446 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 18:59:04.745929 kubelet[2446]: E0123 18:59:04.744028 2446 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d713c2e767f1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:58:51.91796713 +0000 UTC m=+1.756277890,LastTimestamp:2026-01-23 18:58:51.91796713 +0000 UTC m=+1.756277890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:59:04.816576 kubelet[2446]: I0123 18:59:04.815665 2446 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:59:04.816576 kubelet[2446]: E0123 18:59:04.815784 2446 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 18:59:04.870409 kubelet[2446]: I0123 18:59:04.868055 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:59:04.923489 kubelet[2446]: I0123 18:59:04.921386 2446 apiserver.go:52] "Watching apiserver" Jan 23 18:59:04.978534 kubelet[2446]: I0123 18:59:04.977836 2446 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:59:05.038801 kubelet[2446]: E0123 18:59:05.037612 2446 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 18:59:05.038801 kubelet[2446]: I0123 18:59:05.037650 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:59:05.038801 kubelet[2446]: I0123 18:59:05.038111 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:05.045453 kubelet[2446]: E0123 18:59:05.042644 2446 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:05.045453 kubelet[2446]: E0123 18:59:05.044554 2446 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 18:59:05.045453 kubelet[2446]: I0123 18:59:05.044573 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:05.074738 kubelet[2446]: E0123 18:59:05.074617 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:05.084325 kubelet[2446]: E0123 18:59:05.083907 2446 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:05.362175 kubelet[2446]: I0123 18:59:05.361420 2446 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:59:05.397684 kubelet[2446]: E0123 18:59:05.396162 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:05.914046 kubelet[2446]: E0123 18:59:05.913051 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:09.843794 systemd[1]: Reload requested from client PID 2736 ('systemctl') (unit session-7.scope)... Jan 23 18:59:09.844196 systemd[1]: Reloading... Jan 23 18:59:09.980909 kubelet[2446]: E0123 18:59:09.980777 2446 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:10.211845 zram_generator::config[2788]: No configuration found. Jan 23 18:59:10.797819 systemd[1]: Reloading finished in 952 ms. Jan 23 18:59:10.883882 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:59:10.917713 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:59:10.918717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:10.918882 systemd[1]: kubelet.service: Consumed 4.522s CPU time, 134.5M memory peak. Jan 23 18:59:10.926960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:59:11.510801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:11.542228 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:59:11.882975 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:59:11.882975 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:59:11.882975 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 18:59:11.882975 kubelet[2823]: I0123 18:59:11.882528 2823 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:59:11.922974 kubelet[2823]: I0123 18:59:11.921181 2823 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:59:11.922974 kubelet[2823]: I0123 18:59:11.921823 2823 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:59:11.927868 kubelet[2823]: I0123 18:59:11.927765 2823 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:59:11.938913 kubelet[2823]: I0123 18:59:11.938665 2823 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:59:11.959223 kubelet[2823]: I0123 18:59:11.958936 2823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:59:11.995203 kubelet[2823]: I0123 18:59:11.993437 2823 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:59:12.035143 kubelet[2823]: I0123 18:59:12.033169 2823 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 18:59:12.039900 kubelet[2823]: I0123 18:59:12.039749 2823 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:59:12.040505 kubelet[2823]: I0123 18:59:12.039900 2823 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:59:12.052575 kubelet[2823]: I0123 18:59:12.052155 2823 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:59:12.052575 kubelet[2823]: I0123 18:59:12.052449 2823 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:59:12.052575 kubelet[2823]: I0123 18:59:12.052546 2823 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:59:12.053478 kubelet[2823]: I0123 
18:59:12.053013 2823 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:59:12.053478 kubelet[2823]: I0123 18:59:12.053036 2823 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:59:12.053478 kubelet[2823]: I0123 18:59:12.053153 2823 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:59:12.053478 kubelet[2823]: I0123 18:59:12.053180 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:59:12.059942 kubelet[2823]: I0123 18:59:12.059807 2823 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:59:12.059942 kubelet[2823]: I0123 18:59:12.061483 2823 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:59:12.102864 kubelet[2823]: I0123 18:59:12.102427 2823 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:59:12.102864 kubelet[2823]: I0123 18:59:12.102576 2823 server.go:1289] "Started kubelet" Jan 23 18:59:12.105665 kubelet[2823]: I0123 18:59:12.104662 2823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:59:12.105665 kubelet[2823]: I0123 18:59:12.105158 2823 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:59:12.105665 kubelet[2823]: I0123 18:59:12.105385 2823 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:59:12.108620 kubelet[2823]: I0123 18:59:12.108494 2823 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:59:12.125388 kubelet[2823]: I0123 18:59:12.124765 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:59:12.128847 kubelet[2823]: I0123 18:59:12.128725 2823 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:59:12.211210 kubelet[2823]: I0123 18:59:12.208602 2823 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:59:12.211210 kubelet[2823]: E0123 18:59:12.208756 2823 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:59:12.217413 kubelet[2823]: I0123 18:59:12.215724 2823 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:59:12.217413 kubelet[2823]: I0123 18:59:12.215942 2823 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:59:12.238475 sudo[2844]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 18:59:12.239018 sudo[2844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 18:59:12.255163 kubelet[2823]: I0123 18:59:12.254532 2823 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:59:12.255163 kubelet[2823]: I0123 18:59:12.254790 2823 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:59:12.256795 kubelet[2823]: E0123 18:59:12.256515 2823 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:59:12.267204 kubelet[2823]: I0123 18:59:12.266821 2823 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:59:12.356173 kubelet[2823]: I0123 18:59:12.355983 2823 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:59:12.373965 kubelet[2823]: I0123 18:59:12.373628 2823 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:59:12.379521 kubelet[2823]: I0123 18:59:12.378039 2823 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:59:12.379521 kubelet[2823]: I0123 18:59:12.378162 2823 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:59:12.379521 kubelet[2823]: I0123 18:59:12.378178 2823 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:59:12.379521 kubelet[2823]: E0123 18:59:12.378583 2823 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:59:12.482665 kubelet[2823]: E0123 18:59:12.482415 2823 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.504039 2823 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505446 2823 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505485 2823 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505766 2823 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505782 2823 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505809 2823 policy_none.go:49] "None policy: Start" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505823 2823 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.505837 2823 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:59:12.506504 kubelet[2823]: I0123 18:59:12.506032 2823 state_mem.go:75] "Updated machine memory state" Jan 23 18:59:12.539380 kubelet[2823]: E0123 18:59:12.538887 2823 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:59:12.539724 kubelet[2823]: I0123 18:59:12.539465 2823 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:59:12.539724 kubelet[2823]: I0123 18:59:12.539480 2823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:59:12.540144 kubelet[2823]: I0123 18:59:12.539963 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:59:12.547732 kubelet[2823]: E0123 18:59:12.547360 2823 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:59:12.684804 kubelet[2823]: I0123 18:59:12.684764 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:59:12.685820 kubelet[2823]: I0123 18:59:12.685668 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:59:12.685968 kubelet[2823]: I0123 18:59:12.685420 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:12.719011 kubelet[2823]: I0123 18:59:12.715500 2823 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:59:12.722477 kubelet[2823]: I0123 18:59:12.722168 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15cefdf50543f0acb1141bef276aa675-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15cefdf50543f0acb1141bef276aa675\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:59:12.722477 kubelet[2823]: I0123 18:59:12.722467 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15cefdf50543f0acb1141bef276aa675-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15cefdf50543f0acb1141bef276aa675\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:59:12.722728 kubelet[2823]: I0123 18:59:12.722501 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15cefdf50543f0acb1141bef276aa675-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15cefdf50543f0acb1141bef276aa675\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:59:12.722728 kubelet[2823]: I0123 18:59:12.722527 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:12.722728 kubelet[2823]: I0123 18:59:12.722554 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:12.722728 kubelet[2823]: I0123 18:59:12.722581 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:12.722728 kubelet[2823]: I0123 18:59:12.722604 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:59:12.723013 kubelet[2823]: I0123 18:59:12.722625 2823 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:12.723013 kubelet[2823]: I0123 18:59:12.722645 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:59:12.746523 kubelet[2823]: E0123 18:59:12.746038 2823 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 23 18:59:12.782448 kubelet[2823]: I0123 18:59:12.776200 2823 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 18:59:12.782448 kubelet[2823]: I0123 18:59:12.776441 2823 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:59:13.056957 kubelet[2823]: E0123 18:59:13.056739 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:13.060177 kubelet[2823]: E0123 18:59:13.057598 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:13.060177 kubelet[2823]: E0123 18:59:13.057723 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:13.075400 kubelet[2823]: I0123 18:59:13.075167 2823 apiserver.go:52] "Watching apiserver" Jan 23 18:59:13.120027 kubelet[2823]: I0123 18:59:13.119798 2823 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:59:13.267713 sudo[2844]: pam_unix(sudo:session): session closed for user root Jan 23 18:59:13.459432 kubelet[2823]: E0123 18:59:13.458745 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:13.460994 kubelet[2823]: E0123 18:59:13.460754 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:13.460994 kubelet[2823]: E0123 18:59:13.461039 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:13.516012 kubelet[2823]: I0123 18:59:13.513992 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5139698259999999 podStartE2EDuration="1.513969826s" podCreationTimestamp="2026-01-23 18:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:13.510068476 +0000 UTC m=+1.908473086" watchObservedRunningTime="2026-01-23 
18:59:13.513969826 +0000 UTC m=+1.912374435" Jan 23 18:59:13.616861 kubelet[2823]: I0123 18:59:13.616663 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.616642036 podStartE2EDuration="1.616642036s" podCreationTimestamp="2026-01-23 18:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:13.572543618 +0000 UTC m=+1.970948227" watchObservedRunningTime="2026-01-23 18:59:13.616642036 +0000 UTC m=+2.015046656" Jan 23 18:59:14.193459 kubelet[2823]: I0123 18:59:14.191591 2823 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:59:14.194193 containerd[1549]: time="2026-01-23T18:59:14.193825176Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:59:14.194957 kubelet[2823]: I0123 18:59:14.194232 2823 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:59:14.474797 kubelet[2823]: E0123 18:59:14.471593 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:14.475737 kubelet[2823]: E0123 18:59:14.475710 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:15.223199 systemd[1]: Created slice kubepods-besteffort-pod3ada3fef_28b4_4d69_b385_1cb050dd4d26.slice - libcontainer container kubepods-besteffort-pod3ada3fef_28b4_4d69_b385_1cb050dd4d26.slice. Jan 23 18:59:15.260396 systemd[1]: Created slice kubepods-burstable-pod91939f94_c884_4c8c_a9cd_81e863fc3bd2.slice - libcontainer container kubepods-burstable-pod91939f94_c884_4c8c_a9cd_81e863fc3bd2.slice. 
Jan 23 18:59:15.286404 kubelet[2823]: I0123 18:59:15.283691 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-kernel\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.286404 kubelet[2823]: I0123 18:59:15.284699 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ada3fef-28b4-4d69-b385-1cb050dd4d26-xtables-lock\") pod \"kube-proxy-pk9tm\" (UID: \"3ada3fef-28b4-4d69-b385-1cb050dd4d26\") " pod="kube-system/kube-proxy-pk9tm"
Jan 23 18:59:15.286404 kubelet[2823]: I0123 18:59:15.284733 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-bpf-maps\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.286404 kubelet[2823]: I0123 18:59:15.284761 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-cgroup\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.286404 kubelet[2823]: I0123 18:59:15.285094 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-xtables-lock\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.286404 kubelet[2823]: I0123 18:59:15.285211 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj6q4\" (UniqueName: \"kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-kube-api-access-zj6q4\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.287568 systemd[1]: Created slice kubepods-besteffort-podb78d5c71_6568_4895_bbae_9248ea64de26.slice - libcontainer container kubepods-besteffort-podb78d5c71_6568_4895_bbae_9248ea64de26.slice.
Jan 23 18:59:15.289978 kubelet[2823]: I0123 18:59:15.289384 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-run\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.289978 kubelet[2823]: I0123 18:59:15.289452 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-etc-cni-netd\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.289978 kubelet[2823]: I0123 18:59:15.289498 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-net\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.289978 kubelet[2823]: I0123 18:59:15.289519 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hubble-tls\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.289978 kubelet[2823]: I0123 18:59:15.289547 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ada3fef-28b4-4d69-b385-1cb050dd4d26-kube-proxy\") pod \"kube-proxy-pk9tm\" (UID: \"3ada3fef-28b4-4d69-b385-1cb050dd4d26\") " pod="kube-system/kube-proxy-pk9tm"
Jan 23 18:59:15.289978 kubelet[2823]: I0123 18:59:15.289567 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hostproc\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.290563 kubelet[2823]: I0123 18:59:15.289592 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cni-path\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.290563 kubelet[2823]: I0123 18:59:15.289618 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-lib-modules\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.290563 kubelet[2823]: I0123 18:59:15.289642 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ada3fef-28b4-4d69-b385-1cb050dd4d26-lib-modules\") pod \"kube-proxy-pk9tm\" (UID: \"3ada3fef-28b4-4d69-b385-1cb050dd4d26\") " pod="kube-system/kube-proxy-pk9tm"
Jan 23 18:59:15.290563 kubelet[2823]: I0123 18:59:15.289669 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dd85\" (UniqueName: \"kubernetes.io/projected/3ada3fef-28b4-4d69-b385-1cb050dd4d26-kube-api-access-6dd85\") pod \"kube-proxy-pk9tm\" (UID: \"3ada3fef-28b4-4d69-b385-1cb050dd4d26\") " pod="kube-system/kube-proxy-pk9tm"
Jan 23 18:59:15.290563 kubelet[2823]: I0123 18:59:15.289689 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91939f94-c884-4c8c-a9cd-81e863fc3bd2-clustermesh-secrets\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.290751 kubelet[2823]: I0123 18:59:15.289708 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-config-path\") pod \"cilium-zspjq\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " pod="kube-system/cilium-zspjq"
Jan 23 18:59:15.391422 kubelet[2823]: I0123 18:59:15.390751 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b78d5c71-6568-4895-bbae-9248ea64de26-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bmvgs\" (UID: \"b78d5c71-6568-4895-bbae-9248ea64de26\") " pod="kube-system/cilium-operator-6c4d7847fc-bmvgs"
Jan 23 18:59:15.391422 kubelet[2823]: I0123 18:59:15.390799 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9l2b\" (UniqueName: \"kubernetes.io/projected/b78d5c71-6568-4895-bbae-9248ea64de26-kube-api-access-g9l2b\") pod \"cilium-operator-6c4d7847fc-bmvgs\" (UID: \"b78d5c71-6568-4895-bbae-9248ea64de26\") " pod="kube-system/cilium-operator-6c4d7847fc-bmvgs"
Jan 23 18:59:15.848797 kubelet[2823]: E0123 18:59:15.846804 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:15.853008 containerd[1549]: time="2026-01-23T18:59:15.852710538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pk9tm,Uid:3ada3fef-28b4-4d69-b385-1cb050dd4d26,Namespace:kube-system,Attempt:0,}"
Jan 23 18:59:15.881096 kubelet[2823]: E0123 18:59:15.879885 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:15.881805 containerd[1549]: time="2026-01-23T18:59:15.881758193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zspjq,Uid:91939f94-c884-4c8c-a9cd-81e863fc3bd2,Namespace:kube-system,Attempt:0,}"
Jan 23 18:59:15.897658 kubelet[2823]: E0123 18:59:15.897498 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:15.900589 containerd[1549]: time="2026-01-23T18:59:15.900035741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bmvgs,Uid:b78d5c71-6568-4895-bbae-9248ea64de26,Namespace:kube-system,Attempt:0,}"
Jan 23 18:59:16.158400 containerd[1549]: time="2026-01-23T18:59:16.156463860Z" level=info msg="connecting to shim ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a" address="unix:///run/containerd/s/2823b7ebafc37219fdd4e0556954481a438a243babafd4b59bc4b85e42a5b34d" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:16.160953 containerd[1549]: time="2026-01-23T18:59:16.160535730Z" level=info msg="connecting to shim d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0" address="unix:///run/containerd/s/a116842bace4fcfb72943387d6e041b1b1b38148ba8366505421aaef6bb45755" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:16.172777 containerd[1549]: time="2026-01-23T18:59:16.172675022Z" level=info msg="connecting to shim 2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb" address="unix:///run/containerd/s/b4fa35cc1933d19c99f9264d972c1138f3ff63341ee03dd91d7dc925974af8b4" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:16.317887 systemd[1]: Started cri-containerd-d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0.scope - libcontainer container d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0.
Jan 23 18:59:16.347613 systemd[1]: Started cri-containerd-ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a.scope - libcontainer container ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a.
Jan 23 18:59:16.368644 systemd[1]: Started cri-containerd-2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb.scope - libcontainer container 2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb.
Jan 23 18:59:16.520542 containerd[1549]: time="2026-01-23T18:59:16.520109160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pk9tm,Uid:3ada3fef-28b4-4d69-b385-1cb050dd4d26,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a\""
Jan 23 18:59:16.526078 kubelet[2823]: E0123 18:59:16.526039 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:16.566075 containerd[1549]: time="2026-01-23T18:59:16.564916899Z" level=info msg="CreateContainer within sandbox \"ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 18:59:16.621224 containerd[1549]: time="2026-01-23T18:59:16.620941345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bmvgs,Uid:b78d5c71-6568-4895-bbae-9248ea64de26,Namespace:kube-system,Attempt:0,} returns sandbox id \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\""
Jan 23 18:59:16.623724 containerd[1549]: time="2026-01-23T18:59:16.623509840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zspjq,Uid:91939f94-c884-4c8c-a9cd-81e863fc3bd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\""
Jan 23 18:59:16.624989 kubelet[2823]: E0123 18:59:16.624861 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:16.639881 kubelet[2823]: E0123 18:59:16.639829 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:16.646690 containerd[1549]: time="2026-01-23T18:59:16.646498646Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 18:59:16.667749 containerd[1549]: time="2026-01-23T18:59:16.667065271Z" level=info msg="Container 5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:16.723652 containerd[1549]: time="2026-01-23T18:59:16.722638757Z" level=info msg="CreateContainer within sandbox \"ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37\""
Jan 23 18:59:16.723652 containerd[1549]: time="2026-01-23T18:59:16.726060830Z" level=info msg="StartContainer for \"5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37\""
Jan 23 18:59:16.747405 containerd[1549]: time="2026-01-23T18:59:16.746519874Z" level=info msg="connecting to shim 5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37" address="unix:///run/containerd/s/2823b7ebafc37219fdd4e0556954481a438a243babafd4b59bc4b85e42a5b34d" protocol=ttrpc version=3
Jan 23 18:59:16.833411 systemd[1]: Started cri-containerd-5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37.scope - libcontainer container 5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37.
Jan 23 18:59:17.055985 containerd[1549]: time="2026-01-23T18:59:17.055858636Z" level=info msg="StartContainer for \"5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37\" returns successfully"
Jan 23 18:59:17.789085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389246816.mount: Deactivated successfully.
Jan 23 18:59:17.894989 kubelet[2823]: E0123 18:59:17.894778 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:20.399002 kubelet[2823]: E0123 18:59:20.397055 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.936s"
Jan 23 18:59:20.472571 kubelet[2823]: E0123 18:59:20.456028 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:20.952854 kubelet[2823]: I0123 18:59:20.951593 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pk9tm" podStartSLOduration=5.951571249 podStartE2EDuration="5.951571249s" podCreationTimestamp="2026-01-23 18:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:20.822657256 +0000 UTC m=+9.221061867" watchObservedRunningTime="2026-01-23 18:59:20.951571249 +0000 UTC m=+9.349975858"
Jan 23 18:59:21.602589 kubelet[2823]: E0123 18:59:21.598165 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:21.625478 kubelet[2823]: E0123 18:59:21.625064 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:22.135874 kubelet[2823]: E0123 18:59:22.128543 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:22.479775 kubelet[2823]: E0123 18:59:22.471970 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:22.490490 kubelet[2823]: E0123 18:59:22.490068 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:23.006445 kubelet[2823]: E0123 18:59:23.006019 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:23.626972 kubelet[2823]: E0123 18:59:23.625897 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:28.364687 containerd[1549]: time="2026-01-23T18:59:28.363872992Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:28.379218 containerd[1549]: time="2026-01-23T18:59:28.379012906Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 23 18:59:28.383187 containerd[1549]: time="2026-01-23T18:59:28.383147897Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:28.411724 containerd[1549]: time="2026-01-23T18:59:28.406171587Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 11.75961853s"
Jan 23 18:59:28.411724 containerd[1549]: time="2026-01-23T18:59:28.406786286Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 23 18:59:28.417510 containerd[1549]: time="2026-01-23T18:59:28.416219340Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 18:59:28.430895 containerd[1549]: time="2026-01-23T18:59:28.430724993Z" level=info msg="CreateContainer within sandbox \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 18:59:28.482073 containerd[1549]: time="2026-01-23T18:59:28.481822910Z" level=info msg="Container f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:28.538934 containerd[1549]: time="2026-01-23T18:59:28.538612928Z" level=info msg="CreateContainer within sandbox \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\""
Jan 23 18:59:28.541172 containerd[1549]: time="2026-01-23T18:59:28.541009096Z" level=info msg="StartContainer for \"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\""
Jan 23 18:59:28.553180 containerd[1549]: time="2026-01-23T18:59:28.552609907Z" level=info msg="connecting to shim f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930" address="unix:///run/containerd/s/a116842bace4fcfb72943387d6e041b1b1b38148ba8366505421aaef6bb45755" protocol=ttrpc version=3
Jan 23 18:59:28.650573 systemd[1]: Started cri-containerd-f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930.scope - libcontainer container f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930.
Jan 23 18:59:28.975556 containerd[1549]: time="2026-01-23T18:59:28.974983764Z" level=info msg="StartContainer for \"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\" returns successfully"
Jan 23 18:59:30.058616 kubelet[2823]: E0123 18:59:30.058175 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:31.084664 kubelet[2823]: E0123 18:59:31.081622 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:59:51.389841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802720055.mount: Deactivated successfully.
Jan 23 19:00:11.354088 containerd[1549]: time="2026-01-23T19:00:11.353894541Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:00:11.358214 containerd[1549]: time="2026-01-23T19:00:11.357936762Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 23 19:00:11.360019 containerd[1549]: time="2026-01-23T19:00:11.359924355Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:00:11.362502 containerd[1549]: time="2026-01-23T19:00:11.362388358Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 42.945701183s"
Jan 23 19:00:11.362502 containerd[1549]: time="2026-01-23T19:00:11.362481863Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 23 19:00:11.402734 containerd[1549]: time="2026-01-23T19:00:11.402205196Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 19:00:11.436714 containerd[1549]: time="2026-01-23T19:00:11.436559317Z" level=info msg="Container 2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:11.446664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643592001.mount: Deactivated successfully.
Jan 23 19:00:11.462920 containerd[1549]: time="2026-01-23T19:00:11.462646672Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\""
Jan 23 19:00:11.464381 containerd[1549]: time="2026-01-23T19:00:11.464351700Z" level=info msg="StartContainer for \"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\""
Jan 23 19:00:11.466729 containerd[1549]: time="2026-01-23T19:00:11.466491347Z" level=info msg="connecting to shim 2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895" address="unix:///run/containerd/s/b4fa35cc1933d19c99f9264d972c1138f3ff63341ee03dd91d7dc925974af8b4" protocol=ttrpc version=3
Jan 23 19:00:11.679801 systemd[1]: Started cri-containerd-2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895.scope - libcontainer container 2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895.
Jan 23 19:00:11.939423 containerd[1549]: time="2026-01-23T19:00:11.939209589Z" level=info msg="StartContainer for \"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\" returns successfully"
Jan 23 19:00:11.974877 systemd[1]: cri-containerd-2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895.scope: Deactivated successfully.
Jan 23 19:00:11.979375 containerd[1549]: time="2026-01-23T19:00:11.979200947Z" level=info msg="received container exit event container_id:\"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\" id:\"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\" pid:3296 exited_at:{seconds:1769194811 nanos:975948916}"
Jan 23 19:00:12.119963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895-rootfs.mount: Deactivated successfully.
Jan 23 19:00:12.712560 kubelet[2823]: E0123 19:00:12.710975 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:12.745529 containerd[1549]: time="2026-01-23T19:00:12.743203085Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 19:00:12.812718 containerd[1549]: time="2026-01-23T19:00:12.812394201Z" level=info msg="Container bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:12.825760 kubelet[2823]: I0123 19:00:12.825643 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bmvgs" podStartSLOduration=46.048700496 podStartE2EDuration="57.825623944s" podCreationTimestamp="2026-01-23 18:59:15 +0000 UTC" firstStartedPulling="2026-01-23 18:59:16.638697923 +0000 UTC m=+5.037102532" lastFinishedPulling="2026-01-23 18:59:28.41562136 +0000 UTC m=+16.814025980" observedRunningTime="2026-01-23 18:59:30.269211718 +0000 UTC m=+18.667616327" watchObservedRunningTime="2026-01-23 19:00:12.825623944 +0000 UTC m=+61.224028565"
Jan 23 19:00:12.845766 containerd[1549]: time="2026-01-23T19:00:12.845559001Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\""
Jan 23 19:00:12.849130 containerd[1549]: time="2026-01-23T19:00:12.849088339Z" level=info msg="StartContainer for \"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\""
Jan 23 19:00:12.855313 containerd[1549]: time="2026-01-23T19:00:12.855032263Z" level=info msg="connecting to shim bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3" address="unix:///run/containerd/s/b4fa35cc1933d19c99f9264d972c1138f3ff63341ee03dd91d7dc925974af8b4" protocol=ttrpc version=3
Jan 23 19:00:12.915770 systemd[1]: Started cri-containerd-bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3.scope - libcontainer container bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3.
Jan 23 19:00:13.023497 containerd[1549]: time="2026-01-23T19:00:13.023111348Z" level=info msg="StartContainer for \"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\" returns successfully"
Jan 23 19:00:13.069183 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 19:00:13.069767 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:00:13.071691 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:00:13.074668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:00:13.078197 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 19:00:13.080806 systemd[1]: cri-containerd-bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3.scope: Deactivated successfully.
Jan 23 19:00:13.081999 containerd[1549]: time="2026-01-23T19:00:13.081144463Z" level=info msg="received container exit event container_id:\"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\" id:\"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\" pid:3342 exited_at:{seconds:1769194813 nanos:80664763}"
Jan 23 19:00:13.214146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:00:13.727762 kubelet[2823]: E0123 19:00:13.727567 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:13.756538 containerd[1549]: time="2026-01-23T19:00:13.756206294Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 19:00:13.794208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3-rootfs.mount: Deactivated successfully.
Jan 23 19:00:13.817029 containerd[1549]: time="2026-01-23T19:00:13.813131023Z" level=info msg="Container d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:13.834000 containerd[1549]: time="2026-01-23T19:00:13.833753787Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\""
Jan 23 19:00:13.835527 containerd[1549]: time="2026-01-23T19:00:13.835431337Z" level=info msg="StartContainer for \"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\""
Jan 23 19:00:13.840069 containerd[1549]: time="2026-01-23T19:00:13.838351993Z" level=info msg="connecting to shim d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c" address="unix:///run/containerd/s/b4fa35cc1933d19c99f9264d972c1138f3ff63341ee03dd91d7dc925974af8b4" protocol=ttrpc version=3
Jan 23 19:00:13.909192 systemd[1]: Started cri-containerd-d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c.scope - libcontainer container d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c.
Jan 23 19:00:14.113604 containerd[1549]: time="2026-01-23T19:00:14.111082773Z" level=info msg="StartContainer for \"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\" returns successfully"
Jan 23 19:00:14.115085 systemd[1]: cri-containerd-d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c.scope: Deactivated successfully.
Jan 23 19:00:14.122835 containerd[1549]: time="2026-01-23T19:00:14.122656374Z" level=info msg="received container exit event container_id:\"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\" id:\"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\" pid:3387 exited_at:{seconds:1769194814 nanos:122081590}"
Jan 23 19:00:14.195392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c-rootfs.mount: Deactivated successfully.
Jan 23 19:00:14.748418 kubelet[2823]: E0123 19:00:14.747521 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:14.762653 containerd[1549]: time="2026-01-23T19:00:14.762497622Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 19:00:14.809776 containerd[1549]: time="2026-01-23T19:00:14.808358812Z" level=info msg="Container a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:14.829098 containerd[1549]: time="2026-01-23T19:00:14.826582197Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\""
Jan 23 19:00:14.829098 containerd[1549]: time="2026-01-23T19:00:14.828541890Z" level=info msg="StartContainer for \"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\""
Jan 23 19:00:14.831488 containerd[1549]: time="2026-01-23T19:00:14.830026993Z" level=info msg="connecting to shim a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139" address="unix:///run/containerd/s/b4fa35cc1933d19c99f9264d972c1138f3ff63341ee03dd91d7dc925974af8b4" protocol=ttrpc version=3
Jan 23 19:00:14.885705 systemd[1]: Started cri-containerd-a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139.scope - libcontainer container a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139.
Jan 23 19:00:15.025336 systemd[1]: cri-containerd-a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139.scope: Deactivated successfully.
Jan 23 19:00:15.044006 containerd[1549]: time="2026-01-23T19:00:15.042754890Z" level=info msg="received container exit event container_id:\"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\" id:\"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\" pid:3429 exited_at:{seconds:1769194815 nanos:30805848}"
Jan 23 19:00:15.047393 containerd[1549]: time="2026-01-23T19:00:15.047063950Z" level=info msg="StartContainer for \"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\" returns successfully"
Jan 23 19:00:15.154770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139-rootfs.mount: Deactivated successfully.
Jan 23 19:00:15.790073 kubelet[2823]: E0123 19:00:15.789551 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:15.811222 containerd[1549]: time="2026-01-23T19:00:15.811166314Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 19:00:15.898810 containerd[1549]: time="2026-01-23T19:00:15.898682731Z" level=info msg="Container a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:15.943049 containerd[1549]: time="2026-01-23T19:00:15.940932398Z" level=info msg="CreateContainer within sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\""
Jan 23 19:00:15.943049 containerd[1549]: time="2026-01-23T19:00:15.942852645Z" level=info msg="StartContainer for \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\""
Jan 23 19:00:15.949166 containerd[1549]: time="2026-01-23T19:00:15.949034654Z" level=info msg="connecting to shim a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521" address="unix:///run/containerd/s/b4fa35cc1933d19c99f9264d972c1138f3ff63341ee03dd91d7dc925974af8b4" protocol=ttrpc version=3
Jan 23 19:00:16.001676 systemd[1]: Started cri-containerd-a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521.scope - libcontainer container a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521.
Jan 23 19:00:16.245520 containerd[1549]: time="2026-01-23T19:00:16.245357971Z" level=info msg="StartContainer for \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" returns successfully"
Jan 23 19:00:16.751394 kubelet[2823]: I0123 19:00:16.750469 2823 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 19:00:17.148088 kubelet[2823]: E0123 19:00:17.147209 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:17.245474 systemd[1]: Created slice kubepods-burstable-podf0a3563d_f3ba_4661_95da_921f0820b95f.slice - libcontainer container kubepods-burstable-podf0a3563d_f3ba_4661_95da_921f0820b95f.slice.
Jan 23 19:00:17.262114 kubelet[2823]: I0123 19:00:17.261485 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jb8p\" (UniqueName: \"kubernetes.io/projected/f0a3563d-f3ba-4661-95da-921f0820b95f-kube-api-access-4jb8p\") pod \"coredns-674b8bbfcf-npjcp\" (UID: \"f0a3563d-f3ba-4661-95da-921f0820b95f\") " pod="kube-system/coredns-674b8bbfcf-npjcp"
Jan 23 19:00:17.262114 kubelet[2823]: I0123 19:00:17.261524 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0a3563d-f3ba-4661-95da-921f0820b95f-config-volume\") pod \"coredns-674b8bbfcf-npjcp\" (UID: \"f0a3563d-f3ba-4661-95da-921f0820b95f\") " pod="kube-system/coredns-674b8bbfcf-npjcp"
Jan 23 19:00:17.279813 systemd[1]: Created slice kubepods-burstable-pod9bcb343d_1efc_4759_af88_0412ab89ec06.slice - libcontainer container kubepods-burstable-pod9bcb343d_1efc_4759_af88_0412ab89ec06.slice.
Jan 23 19:00:17.285098 kubelet[2823]: I0123 19:00:17.284976 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zspjq" podStartSLOduration=7.563131772 podStartE2EDuration="1m2.284961521s" podCreationTimestamp="2026-01-23 18:59:15 +0000 UTC" firstStartedPulling="2026-01-23 18:59:16.645094269 +0000 UTC m=+5.043498879" lastFinishedPulling="2026-01-23 19:00:11.366924018 +0000 UTC m=+59.765328628" observedRunningTime="2026-01-23 19:00:17.275509909 +0000 UTC m=+65.673914529" watchObservedRunningTime="2026-01-23 19:00:17.284961521 +0000 UTC m=+65.683366131"
Jan 23 19:00:17.367587 kubelet[2823]: I0123 19:00:17.367469 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ll2c\" (UniqueName: \"kubernetes.io/projected/9bcb343d-1efc-4759-af88-0412ab89ec06-kube-api-access-7ll2c\") pod \"coredns-674b8bbfcf-px9nx\" (UID: \"9bcb343d-1efc-4759-af88-0412ab89ec06\") " pod="kube-system/coredns-674b8bbfcf-px9nx"
Jan 23 19:00:17.368007 kubelet[2823]: I0123 19:00:17.367789 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bcb343d-1efc-4759-af88-0412ab89ec06-config-volume\") pod \"coredns-674b8bbfcf-px9nx\" (UID: \"9bcb343d-1efc-4759-af88-0412ab89ec06\") " pod="kube-system/coredns-674b8bbfcf-px9nx"
Jan 23 19:00:17.563540 kubelet[2823]: E0123 19:00:17.562167 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:17.565429 containerd[1549]: time="2026-01-23T19:00:17.565384830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-npjcp,Uid:f0a3563d-f3ba-4661-95da-921f0820b95f,Namespace:kube-system,Attempt:0,}"
Jan 23 19:00:17.590505 kubelet[2823]: E0123 19:00:17.590071 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:17.592783 containerd[1549]: time="2026-01-23T19:00:17.590783135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-px9nx,Uid:9bcb343d-1efc-4759-af88-0412ab89ec06,Namespace:kube-system,Attempt:0,}"
Jan 23 19:00:18.172074 kubelet[2823]: E0123 19:00:18.171087 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:19.194439 kubelet[2823]: E0123 19:00:19.194167 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:20.212890 systemd-networkd[1468]: cilium_host: Link UP
Jan 23 19:00:20.213893 systemd-networkd[1468]: cilium_net: Link UP
Jan 23 19:00:20.215649 systemd-networkd[1468]: cilium_net: Gained carrier
Jan 23 19:00:20.216033 systemd-networkd[1468]: cilium_host: Gained carrier
Jan 23 19:00:20.715150 systemd-networkd[1468]: cilium_host: Gained IPv6LL
Jan 23 19:00:20.845776 systemd-networkd[1468]: cilium_vxlan: Link UP
Jan 23 19:00:20.845786 systemd-networkd[1468]: cilium_vxlan: Gained carrier
Jan 23 19:00:20.980159 systemd-networkd[1468]: cilium_net: Gained IPv6LL
Jan 23 19:00:21.611782 kernel: NET: Registered PF_ALG protocol family
Jan 23 19:00:22.070073 systemd-networkd[1468]: cilium_vxlan: Gained IPv6LL
Jan 23 19:00:22.330613 kubelet[2823]: E0123 19:00:22.330222 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:23.981442 systemd-networkd[1468]: lxc_health: Link UP
Jan 23 19:00:23.994580 systemd-networkd[1468]: lxc_health: Gained carrier
Jan 23 19:00:24.257705 systemd-networkd[1468]: lxc875c4fad52b9: Link UP
Jan 23 19:00:24.286525 kernel: eth0: renamed from tmp223f1
Jan 23 19:00:24.310714 systemd-networkd[1468]: lxcd89749807d4e: Link UP
Jan 23 19:00:24.328438 systemd-networkd[1468]: lxc875c4fad52b9: Gained carrier
Jan 23 19:00:24.333902 kernel: eth0: renamed from tmp42b61
Jan 23 19:00:24.366484 systemd-networkd[1468]: lxcd89749807d4e: Gained carrier
Jan 23 19:00:25.206365 systemd-networkd[1468]: lxc_health: Gained IPv6LL
Jan 23 19:00:25.650714 systemd-networkd[1468]: lxc875c4fad52b9: Gained IPv6LL
Jan 23 19:00:25.885013 kubelet[2823]: E0123 19:00:25.884814 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:26.290191 kubelet[2823]: E0123 19:00:26.281918 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:26.356066 systemd-networkd[1468]: lxcd89749807d4e: Gained IPv6LL
Jan 23 19:00:28.392447 kubelet[2823]: E0123 19:00:28.388748 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:31.856178 sudo[1755]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:31.861376 sshd[1754]: Connection closed by 10.0.0.1 port 40704
Jan 23 19:00:31.863394 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:31.880134 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:40704.service: Deactivated successfully.
Jan 23 19:00:31.886191 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 19:00:31.886796 systemd[1]: session-7.scope: Consumed 18.107s CPU time, 228.6M memory peak.
Jan 23 19:00:31.894848 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit.
Jan 23 19:00:31.908437 systemd-logind[1539]: Removed session 7.
Jan 23 19:00:33.187463 containerd[1549]: time="2026-01-23T19:00:33.180904009Z" level=info msg="connecting to shim 223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558" address="unix:///run/containerd/s/1a195da0712090e845ff161374b68729488d075c2f417309bfa0547f4114c8cc" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:00:33.269010 containerd[1549]: time="2026-01-23T19:00:33.268901989Z" level=info msg="connecting to shim 42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f" address="unix:///run/containerd/s/576ba57a2d515ced4cbcfe989381e6bddd7edbedb327cc81e452ccf99a37144c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:00:33.379499 systemd[1]: Started cri-containerd-223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558.scope - libcontainer container 223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558.
Jan 23 19:00:33.383387 kubelet[2823]: E0123 19:00:33.380078 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:33.392755 systemd[1]: Started cri-containerd-42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f.scope - libcontainer container 42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f.
Jan 23 19:00:33.520846 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 23 19:00:33.540837 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 23 19:00:33.724171 containerd[1549]: time="2026-01-23T19:00:33.722832595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-px9nx,Uid:9bcb343d-1efc-4759-af88-0412ab89ec06,Namespace:kube-system,Attempt:0,} returns sandbox id \"42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f\""
Jan 23 19:00:33.741471 kubelet[2823]: E0123 19:00:33.735686 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:33.772482 containerd[1549]: time="2026-01-23T19:00:33.766727670Z" level=info msg="CreateContainer within sandbox \"42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 19:00:33.843880 containerd[1549]: time="2026-01-23T19:00:33.843824448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-npjcp,Uid:f0a3563d-f3ba-4661-95da-921f0820b95f,Namespace:kube-system,Attempt:0,} returns sandbox id \"223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558\""
Jan 23 19:00:33.858177 kubelet[2823]: E0123 19:00:33.856121 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:33.886460 containerd[1549]: time="2026-01-23T19:00:33.885169498Z" level=info msg="CreateContainer within sandbox \"223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 19:00:33.940460 containerd[1549]: time="2026-01-23T19:00:33.934227514Z" level=info msg="Container efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:33.992173 containerd[1549]: time="2026-01-23T19:00:33.990147190Z" level=info msg="Container 79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:00:34.017111 containerd[1549]: time="2026-01-23T19:00:34.016876476Z" level=info msg="CreateContainer within sandbox \"42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625\""
Jan 23 19:00:34.026853 containerd[1549]: time="2026-01-23T19:00:34.022488209Z" level=info msg="StartContainer for \"efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625\""
Jan 23 19:00:34.037022 containerd[1549]: time="2026-01-23T19:00:34.036677884Z" level=info msg="connecting to shim efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625" address="unix:///run/containerd/s/576ba57a2d515ced4cbcfe989381e6bddd7edbedb327cc81e452ccf99a37144c" protocol=ttrpc version=3
Jan 23 19:00:34.038796 containerd[1549]: time="2026-01-23T19:00:34.038640961Z" level=info msg="CreateContainer within sandbox \"223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04\""
Jan 23 19:00:34.045736 containerd[1549]: time="2026-01-23T19:00:34.039631472Z" level=info msg="StartContainer for \"79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04\""
Jan 23 19:00:34.045736 containerd[1549]: time="2026-01-23T19:00:34.041517795Z" level=info msg="connecting to shim 79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04" address="unix:///run/containerd/s/1a195da0712090e845ff161374b68729488d075c2f417309bfa0547f4114c8cc" protocol=ttrpc version=3
Jan 23 19:00:34.136971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174815731.mount: Deactivated successfully.
Jan 23 19:00:34.201012 systemd[1]: Started cri-containerd-efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625.scope - libcontainer container efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625.
Jan 23 19:00:34.248840 systemd[1]: Started cri-containerd-79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04.scope - libcontainer container 79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04.
Jan 23 19:00:34.660597 containerd[1549]: time="2026-01-23T19:00:34.660378765Z" level=info msg="StartContainer for \"efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625\" returns successfully"
Jan 23 19:00:34.680215 containerd[1549]: time="2026-01-23T19:00:34.679814044Z" level=info msg="StartContainer for \"79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04\" returns successfully"
Jan 23 19:00:35.574969 kubelet[2823]: E0123 19:00:35.573640 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:35.602922 kubelet[2823]: E0123 19:00:35.602790 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:35.659452 kubelet[2823]: I0123 19:00:35.656707 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-px9nx" podStartSLOduration=80.656686249 podStartE2EDuration="1m20.656686249s" podCreationTimestamp="2026-01-23 18:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:00:35.648705455 +0000 UTC m=+84.047110065" watchObservedRunningTime="2026-01-23 19:00:35.656686249 +0000 UTC m=+84.055090859"
Jan 23 19:00:35.891033 kubelet[2823]: I0123 19:00:35.889616 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-npjcp" podStartSLOduration=80.889592936 podStartE2EDuration="1m20.889592936s" podCreationTimestamp="2026-01-23 18:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:00:35.827767472 +0000 UTC m=+84.226172112" watchObservedRunningTime="2026-01-23 19:00:35.889592936 +0000 UTC m=+84.287997586"
Jan 23 19:00:36.609196 kubelet[2823]: E0123 19:00:36.596339 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:36.609196 kubelet[2823]: E0123 19:00:36.596942 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:37.614676 kubelet[2823]: E0123 19:00:37.614525 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:37.617903 kubelet[2823]: E0123 19:00:37.614892 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:38.624425 kubelet[2823]: E0123 19:00:38.624152 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:39.382540 kubelet[2823]: E0123 19:00:39.381812 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:49.389944 kubelet[2823]: E0123 19:00:49.389492 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:00:51.383499 kubelet[2823]: E0123 19:00:51.383007 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:01:51.402469 kubelet[2823]: E0123 19:01:51.397065 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:01:51.402469 kubelet[2823]: E0123 19:01:51.399165 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:01:53.396202 kubelet[2823]: E0123 19:01:53.393129 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:01:54.430216 kubelet[2823]: E0123 19:01:54.428493 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:01:56.406895 kubelet[2823]: E0123 19:01:56.393768 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:01:58.557475 kubelet[2823]: E0123 19:01:58.544989 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:09.388451 kubelet[2823]: E0123 19:02:09.386740 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:21.389500 kubelet[2823]: E0123 19:02:21.380404 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:52.744058 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:60686.service - OpenSSH per-connection server daemon (10.0.0.1:60686).
Jan 23 19:02:53.296037 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 60686 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY
Jan 23 19:02:53.318755 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:02:53.367953 systemd-logind[1539]: New session 8 of user core.
Jan 23 19:02:53.377424 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 19:02:54.382170 sshd[4324]: Connection closed by 10.0.0.1 port 60686
Jan 23 19:02:54.383227 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Jan 23 19:02:54.428009 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:60686.service: Deactivated successfully.
Jan 23 19:02:54.461840 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 19:02:54.473955 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit.
Jan 23 19:02:54.486428 systemd-logind[1539]: Removed session 8.
Jan 23 19:02:59.495589 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:52844.service - OpenSSH per-connection server daemon (10.0.0.1:52844).
Jan 23 19:02:59.925503 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:02:59.930570 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:02:59.972420 systemd-logind[1539]: New session 9 of user core. Jan 23 19:02:59.997921 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 19:03:00.678008 sshd[4347]: Connection closed by 10.0.0.1 port 52844 Jan 23 19:03:00.679940 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:00.694922 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:52844.service: Deactivated successfully. Jan 23 19:03:00.704199 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:03:00.710946 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:03:00.719710 systemd-logind[1539]: Removed session 9. Jan 23 19:03:02.382031 kubelet[2823]: E0123 19:03:02.379919 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:05.381893 kubelet[2823]: E0123 19:03:05.380546 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:05.747206 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:59192.service - OpenSSH per-connection server daemon (10.0.0.1:59192). Jan 23 19:03:06.008923 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 59192 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:06.011791 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:06.056999 systemd-logind[1539]: New session 10 of user core. Jan 23 19:03:06.093701 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 19:03:06.585017 sshd[4364]: Connection closed by 10.0.0.1 port 59192 Jan 23 19:03:06.586753 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:06.595936 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:59192.service: Deactivated successfully. Jan 23 19:03:06.614478 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:03:06.617560 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:03:06.633342 systemd-logind[1539]: Removed session 10. Jan 23 19:03:11.384914 kubelet[2823]: E0123 19:03:11.382163 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:11.678021 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:59194.service - OpenSSH per-connection server daemon (10.0.0.1:59194). Jan 23 19:03:11.901097 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 59194 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:11.911798 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:11.948357 systemd-logind[1539]: New session 11 of user core. Jan 23 19:03:11.959003 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 19:03:12.561452 sshd[4383]: Connection closed by 10.0.0.1 port 59194 Jan 23 19:03:12.565729 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:12.598231 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:59194.service: Deactivated successfully. Jan 23 19:03:12.615544 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:03:12.631528 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:03:12.644055 systemd-logind[1539]: Removed session 11. Jan 23 19:03:17.658642 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:35372.service - OpenSSH per-connection server daemon (10.0.0.1:35372). Jan 23 19:03:17.942573 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 35372 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:17.949071 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:17.976171 systemd-logind[1539]: New session 12 of user core. Jan 23 19:03:17.995061 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:03:18.503593 sshd[4402]: Connection closed by 10.0.0.1 port 35372 Jan 23 19:03:18.506663 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:18.522128 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:35372.service: Deactivated successfully. Jan 23 19:03:18.536577 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:03:18.549852 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:03:18.557945 systemd-logind[1539]: Removed session 12. Jan 23 19:03:19.402507 kubelet[2823]: E0123 19:03:19.401009 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:23.385071 kubelet[2823]: E0123 19:03:23.381185 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:23.585366 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384). Jan 23 19:03:23.939797 sshd[4419]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:23.945451 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:24.024572 systemd-logind[1539]: New session 13 of user core. Jan 23 19:03:24.038393 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 19:03:24.764417 sshd[4422]: Connection closed by 10.0.0.1 port 35384 Jan 23 19:03:24.763610 sshd-session[4419]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:24.793884 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:03:24.812200 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:35384.service: Deactivated successfully. Jan 23 19:03:24.825617 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:03:24.846911 systemd-logind[1539]: Removed session 13. 
Jan 23 19:03:25.395128 kubelet[2823]: E0123 19:03:25.383902 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:26.385075 kubelet[2823]: E0123 19:03:26.382459 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:29.833942 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:54976.service - OpenSSH per-connection server daemon (10.0.0.1:54976). Jan 23 19:03:30.222889 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:30.230014 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:30.275484 systemd-logind[1539]: New session 14 of user core. Jan 23 19:03:30.320440 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:03:31.241007 sshd[4440]: Connection closed by 10.0.0.1 port 54976 Jan 23 19:03:31.243409 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:31.277629 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:54976.service: Deactivated successfully. Jan 23 19:03:31.285984 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:03:31.291666 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Jan 23 19:03:31.311921 systemd-logind[1539]: Removed session 14. Jan 23 19:03:36.323899 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:41042.service - OpenSSH per-connection server daemon (10.0.0.1:41042). Jan 23 19:03:36.573669 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 41042 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:36.583466 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:36.614092 systemd-logind[1539]: New session 15 of user core. Jan 23 19:03:36.630625 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 19:03:37.018207 sshd[4459]: Connection closed by 10.0.0.1 port 41042 Jan 23 19:03:37.019856 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:37.030900 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:41042.service: Deactivated successfully. Jan 23 19:03:37.039953 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:03:37.051051 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:03:37.060710 systemd-logind[1539]: Removed session 15. Jan 23 19:03:41.383372 kubelet[2823]: E0123 19:03:41.380161 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:03:42.159742 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:41058.service - OpenSSH per-connection server daemon (10.0.0.1:41058). Jan 23 19:03:42.557537 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 41058 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:42.561773 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:42.593161 systemd-logind[1539]: New session 16 of user core. Jan 23 19:03:42.648648 systemd[1]: Started session-16.scope - Session 16 of User core. 
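
Sessions 8 through 16 above all trace the same logind lifecycle: "New session N of user core" at login and "Removed session N" at teardown. Pairing those two lines while scanning a journal gives per-session lifetimes; a rough sketch, assuming one entry per line as journalctl emits, and that the regexes and year-less timestamp layout below match this log format:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // Entry shapes assumed from this journal, e.g.:
    //   "Jan 23 19:02:53.367953 systemd-logind[1539]: New session 8 of user core."
    var (
        openRe  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
        closeRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
    )

    const stamp = "Jan 2 15:04:05.000000" // journal short timestamps carry no year

    func main() {
        opened := map[string]time.Time{} // session id -> open time
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if m := openRe.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(stamp, m[1]); err == nil {
                    opened[m[2]] = t
                }
            } else if m := closeRe.FindStringSubmatch(line); m != nil {
                t, err := time.Parse(stamp, m[1])
                if err != nil {
                    continue
                }
                if start, ok := opened[m[2]]; ok {
                    fmt.Printf("session %s lived %s\n", m[2], t.Sub(start))
                    delete(opened, m[2])
                }
            }
        }
    }

Fed the entries above, it would report the short-lived pattern visible here: most sessions last about a second between open and removal.
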
Jan 23 19:03:43.257692 sshd[4477]: Connection closed by 10.0.0.1 port 41058 Jan 23 19:03:43.257465 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:43.290645 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:41058.service: Deactivated successfully. Jan 23 19:03:43.303389 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:03:43.307042 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:03:43.355145 systemd-logind[1539]: Removed session 16. Jan 23 19:03:48.315443 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:50178.service - OpenSSH per-connection server daemon (10.0.0.1:50178). Jan 23 19:03:48.565907 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 50178 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:48.571590 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:48.611974 systemd-logind[1539]: New session 17 of user core. Jan 23 19:03:48.623770 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:03:49.091114 sshd[4496]: Connection closed by 10.0.0.1 port 50178 Jan 23 19:03:49.089606 sshd-session[4493]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:49.108628 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:50178.service: Deactivated successfully. Jan 23 19:03:49.118931 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:03:49.127373 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:03:49.140667 systemd-logind[1539]: Removed session 17. Jan 23 19:03:53.443360 containerd[1549]: time="2026-01-23T19:03:53.360536084Z" level=warning msg="container event discarded" container=50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1 type=CONTAINER_CREATED_EVENT Jan 23 19:03:53.443360 containerd[1549]: time="2026-01-23T19:03:53.443047053Z" level=warning msg="container event discarded" container=50464a0cd5553e4fca4965be8847a9676d76a466cbe6be5aeeb554cddb2a31e1 type=CONTAINER_STARTED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535520788Z" level=warning msg="container event discarded" container=41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb type=CONTAINER_CREATED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535581192Z" level=warning msg="container event discarded" container=41f54dc0b778da6a95930633e8090c1427713c070227cb9f6797f697bcadf8fb type=CONTAINER_STARTED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535593495Z" level=warning msg="container event discarded" container=7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046 type=CONTAINER_CREATED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535603252Z" level=warning msg="container event discarded" container=7e46ef3480c0747981eb7db911e26dd3aaaaae90bd8978dac4f04f77f72df046 type=CONTAINER_STARTED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535613181Z" level=warning msg="container event discarded" container=35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f type=CONTAINER_CREATED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535622448Z" level=warning msg="container event discarded" container=c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9 type=CONTAINER_CREATED_EVENT Jan 23 19:03:53.536451 containerd[1549]: time="2026-01-23T19:03:53.535631966Z" level=warning 
msg="container event discarded" container=afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40 type=CONTAINER_CREATED_EVENT Jan 23 19:03:53.744577 containerd[1549]: time="2026-01-23T19:03:53.741445444Z" level=warning msg="container event discarded" container=35b6feeefabf6ae7c365101d5afcfec9e235c6766a81cbb22122b4e69f62ad8f type=CONTAINER_STARTED_EVENT Jan 23 19:03:53.821027 containerd[1549]: time="2026-01-23T19:03:53.819086061Z" level=warning msg="container event discarded" container=afc4903cfa079960c018717357c96556706d1de0d11593bd3c55bf4c7ac91b40 type=CONTAINER_STARTED_EVENT Jan 23 19:03:53.855785 containerd[1549]: time="2026-01-23T19:03:53.855526216Z" level=warning msg="container event discarded" container=c024d8b48a7ed7acaabb867a02b8bd46ae9096eb58e0acca22802ef2ddb184b9 type=CONTAINER_STARTED_EVENT Jan 23 19:03:54.153176 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:50184.service - OpenSSH per-connection server daemon (10.0.0.1:50184). Jan 23 19:03:54.478660 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 50184 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:03:54.481515 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:03:54.561093 systemd-logind[1539]: New session 18 of user core. Jan 23 19:03:54.581136 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:03:55.392215 sshd[4516]: Connection closed by 10.0.0.1 port 50184 Jan 23 19:03:55.443649 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Jan 23 19:03:55.456427 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:50184.service: Deactivated successfully. Jan 23 19:03:55.463465 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:03:55.485233 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:03:55.504211 systemd-logind[1539]: Removed session 18. Jan 23 19:04:00.489012 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:37324.service - OpenSSH per-connection server daemon (10.0.0.1:37324). Jan 23 19:04:00.982541 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 37324 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:00.987609 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:01.061197 systemd-logind[1539]: New session 19 of user core. Jan 23 19:04:01.087795 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:04:01.644432 sshd[4533]: Connection closed by 10.0.0.1 port 37324 Jan 23 19:04:01.658089 sshd-session[4530]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:01.709195 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:37324.service: Deactivated successfully. Jan 23 19:04:01.722047 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:04:01.737619 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:04:01.750188 systemd-logind[1539]: Removed session 19. Jan 23 19:04:06.722069 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:52640.service - OpenSSH per-connection server daemon (10.0.0.1:52640). Jan 23 19:04:07.297155 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 52640 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:07.309191 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:07.356374 systemd-logind[1539]: New session 20 of user core. 
Jan 23 19:04:07.385834 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 19:04:08.198888 sshd[4550]: Connection closed by 10.0.0.1 port 52640 Jan 23 19:04:08.201634 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:08.240769 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:52640.service: Deactivated successfully. Jan 23 19:04:08.266487 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:04:08.293698 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:04:08.319823 systemd-logind[1539]: Removed session 20. Jan 23 19:04:13.248734 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:52644.service - OpenSSH per-connection server daemon (10.0.0.1:52644). Jan 23 19:04:13.462225 sshd[4567]: Accepted publickey for core from 10.0.0.1 port 52644 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:13.469699 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:13.522395 systemd-logind[1539]: New session 21 of user core. Jan 23 19:04:13.532738 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 19:04:13.943496 sshd[4570]: Connection closed by 10.0.0.1 port 52644 Jan 23 19:04:13.944206 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:13.952605 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:52644.service: Deactivated successfully. Jan 23 19:04:13.957624 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:04:13.960583 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:04:13.965050 systemd-logind[1539]: Removed session 21. Jan 23 19:04:16.531074 containerd[1549]: time="2026-01-23T19:04:16.529924336Z" level=warning msg="container event discarded" container=ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a type=CONTAINER_CREATED_EVENT Jan 23 19:04:16.531074 containerd[1549]: time="2026-01-23T19:04:16.530793305Z" level=warning msg="container event discarded" container=ef09aa93b7338c6e721319fd8a9e699674e1c537b9b5824630088b72c5d1c65a type=CONTAINER_STARTED_EVENT Jan 23 19:04:16.631158 containerd[1549]: time="2026-01-23T19:04:16.631041389Z" level=warning msg="container event discarded" container=d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0 type=CONTAINER_CREATED_EVENT Jan 23 19:04:16.632375 containerd[1549]: time="2026-01-23T19:04:16.632203013Z" level=warning msg="container event discarded" container=d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0 type=CONTAINER_STARTED_EVENT Jan 23 19:04:16.632375 containerd[1549]: time="2026-01-23T19:04:16.632233961Z" level=warning msg="container event discarded" container=2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb type=CONTAINER_CREATED_EVENT Jan 23 19:04:16.632375 containerd[1549]: time="2026-01-23T19:04:16.632321294Z" level=warning msg="container event discarded" container=2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb type=CONTAINER_STARTED_EVENT Jan 23 19:04:16.728512 containerd[1549]: time="2026-01-23T19:04:16.728393332Z" level=warning msg="container event discarded" container=5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37 type=CONTAINER_CREATED_EVENT Jan 23 19:04:17.061069 containerd[1549]: time="2026-01-23T19:04:17.060572812Z" level=warning msg="container event discarded" container=5c64b27b4666e4cc407b0f129397b86c6d2ab27a6327b4602b4e105f4d599e37 type=CONTAINER_STARTED_EVENT Jan 23 
19:04:18.968903 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:52128.service - OpenSSH per-connection server daemon (10.0.0.1:52128). Jan 23 19:04:19.141775 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 52128 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:19.146215 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:19.165596 systemd-logind[1539]: New session 22 of user core. Jan 23 19:04:19.171661 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:04:19.485554 sshd[4590]: Connection closed by 10.0.0.1 port 52128 Jan 23 19:04:19.486505 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:19.504679 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:52128.service: Deactivated successfully. Jan 23 19:04:19.505233 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:04:19.510688 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:04:19.520677 systemd-logind[1539]: Removed session 22. Jan 23 19:04:23.386095 kubelet[2823]: E0123 19:04:23.383174 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:24.519139 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:41610.service - OpenSSH per-connection server daemon (10.0.0.1:41610). Jan 23 19:04:24.681936 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 41610 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:24.685533 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:24.712954 systemd-logind[1539]: New session 23 of user core. Jan 23 19:04:24.725796 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 19:04:25.116455 sshd[4609]: Connection closed by 10.0.0.1 port 41610 Jan 23 19:04:25.117496 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:25.124922 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:41610.service: Deactivated successfully. Jan 23 19:04:25.128177 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:04:25.130186 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:04:25.136982 systemd-logind[1539]: Removed session 23. Jan 23 19:04:26.392413 kubelet[2823]: E0123 19:04:26.391834 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:28.545017 containerd[1549]: time="2026-01-23T19:04:28.544884372Z" level=warning msg="container event discarded" container=f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930 type=CONTAINER_CREATED_EVENT Jan 23 19:04:28.957842 containerd[1549]: time="2026-01-23T19:04:28.956163122Z" level=warning msg="container event discarded" container=f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930 type=CONTAINER_STARTED_EVENT Jan 23 19:04:30.168791 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:41612.service - OpenSSH per-connection server daemon (10.0.0.1:41612). 
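
Every accepted login in this log reports the same RSA key fingerprint, SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY. That digest is the SHA-256 of the wire-format public key, base64-encoded without padding, and can be recomputed from an authorized_keys entry with the golang.org/x/crypto/ssh package (the key file path below is an assumption):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Assumed location of the core user's authorized key.
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Prints e.g. "SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY".
        fmt.Println(ssh.FingerprintSHA256(pub))
    }

Matching this output against the sshd log line is a quick way to confirm which authorized key a given session used.
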
Jan 23 19:04:30.480138 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 41612 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:30.484339 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:30.518919 systemd-logind[1539]: New session 24 of user core. Jan 23 19:04:30.532671 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:04:30.921079 sshd[4626]: Connection closed by 10.0.0.1 port 41612 Jan 23 19:04:30.922513 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:30.942044 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:41612.service: Deactivated successfully. Jan 23 19:04:30.945384 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:04:30.945703 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:04:30.957520 systemd-logind[1539]: Removed session 24. Jan 23 19:04:32.397380 kubelet[2823]: E0123 19:04:32.396952 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:35.396937 kubelet[2823]: E0123 19:04:35.392559 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:36.095343 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778). Jan 23 19:04:37.187767 sshd[4640]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:37.197328 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:37.263871 systemd-logind[1539]: New session 25 of user core. Jan 23 19:04:37.295666 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 19:04:37.998843 sshd[4643]: Connection closed by 10.0.0.1 port 40778 Jan 23 19:04:38.011688 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:38.051123 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:40778.service: Deactivated successfully. Jan 23 19:04:38.064077 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 19:04:38.125977 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Jan 23 19:04:38.140674 systemd-logind[1539]: Removed session 25. Jan 23 19:04:43.052818 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:40786.service - OpenSSH per-connection server daemon (10.0.0.1:40786). Jan 23 19:04:43.294469 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 40786 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:43.298380 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:43.309374 systemd-logind[1539]: New session 26 of user core. Jan 23 19:04:43.322674 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 19:04:43.637865 sshd[4661]: Connection closed by 10.0.0.1 port 40786 Jan 23 19:04:43.639157 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:43.657942 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:40786.service: Deactivated successfully. Jan 23 19:04:43.664997 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 19:04:43.675509 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. 
Jan 23 19:04:43.683080 systemd-logind[1539]: Removed session 26. Jan 23 19:04:47.890678 kubelet[2823]: E0123 19:04:47.884588 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:50.350017 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572). Jan 23 19:04:51.283495 kubelet[2823]: E0123 19:04:51.264284 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.548s" Jan 23 19:04:51.283495 kubelet[2823]: E0123 19:04:51.272065 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:51.283495 kubelet[2823]: E0123 19:04:51.280029 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:54.191543 sshd[4676]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:04:54.212798 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:04:54.255808 kubelet[2823]: E0123 19:04:54.255700 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.352s" Jan 23 19:04:54.270217 systemd-logind[1539]: New session 27 of user core. Jan 23 19:04:54.312522 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 19:04:56.480107 sshd[4682]: Connection closed by 10.0.0.1 port 60572 Jan 23 19:04:56.487027 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:56.526208 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:60572.service: Deactivated successfully. Jan 23 19:04:56.547638 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 19:04:56.558822 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit. Jan 23 19:04:56.566731 systemd-logind[1539]: Removed session 27. Jan 23 19:05:01.380945 kubelet[2823]: E0123 19:05:01.380159 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:01.544657 systemd[1]: Started sshd@27-10.0.0.36:22-10.0.0.1:53300.service - OpenSSH per-connection server daemon (10.0.0.1:53300). Jan 23 19:05:01.832617 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 53300 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:01.835088 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:01.867654 systemd-logind[1539]: New session 28 of user core. Jan 23 19:05:01.898954 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 19:05:02.427397 sshd[4700]: Connection closed by 10.0.0.1 port 53300 Jan 23 19:05:02.430006 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:02.493218 systemd[1]: sshd@27-10.0.0.36:22-10.0.0.1:53300.service: Deactivated successfully. Jan 23 19:05:02.498548 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 19:05:02.503020 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit. 
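
"Housekeeping took longer than expected" is the kubelet measuring its own housekeeping tick: the loop is meant to run every 1s, and when a pass takes 2.548s the next wakeup is late and the overrun is logged. The shape of that self-check is a ticker loop comparing the gap between wakeups against the expected period; a simplified sketch, with an artificial slow pass standing in for real housekeeping work:

    package main

    import (
        "fmt"
        "time"
    )

    // doHousekeeping simulates a pass that occasionally overruns its budget.
    func doHousekeeping(i int) {
        if i == 2 {
            time.Sleep(1500 * time.Millisecond)
        }
    }

    func main() {
        const expected = time.Second
        ticker := time.NewTicker(expected)
        defer ticker.Stop()

        last := time.Now()
        for i := 0; i < 5; i++ {
            <-ticker.C
            now := time.Now()
            if actual := now.Sub(last); actual > expected+100*time.Millisecond {
                // The kubelet's analogue of "Housekeeping took longer than expected".
                fmt.Printf("housekeeping took too long expected=%s actual=%s\n",
                    expected, actual.Round(time.Millisecond))
            }
            last = now
            doHousekeeping(i)
        }
    }

On this node the overruns cluster around the same windows as the DNS warnings, which is consistent with a generally slow or contended sync loop rather than one bad pod.
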
Jan 23 19:05:02.523123 systemd[1]: Started sshd@28-10.0.0.36:22-10.0.0.1:53316.service - OpenSSH per-connection server daemon (10.0.0.1:53316). Jan 23 19:05:02.525354 systemd-logind[1539]: Removed session 28. Jan 23 19:05:02.735922 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 53316 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:02.740886 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:02.768943 systemd-logind[1539]: New session 29 of user core. Jan 23 19:05:02.786782 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 19:05:03.526809 sshd[4717]: Connection closed by 10.0.0.1 port 53316 Jan 23 19:05:03.533191 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:03.564087 systemd[1]: Started sshd@29-10.0.0.36:22-10.0.0.1:53320.service - OpenSSH per-connection server daemon (10.0.0.1:53320). Jan 23 19:05:03.565232 systemd[1]: sshd@28-10.0.0.36:22-10.0.0.1:53316.service: Deactivated successfully. Jan 23 19:05:03.573772 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 19:05:03.588965 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit. Jan 23 19:05:03.606609 systemd-logind[1539]: Removed session 29. Jan 23 19:05:03.764485 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 53320 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:03.770177 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:03.796173 systemd-logind[1539]: New session 30 of user core. Jan 23 19:05:03.832693 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 23 19:05:04.181414 sshd[4732]: Connection closed by 10.0.0.1 port 53320 Jan 23 19:05:04.183614 sshd-session[4726]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:04.193141 systemd[1]: sshd@29-10.0.0.36:22-10.0.0.1:53320.service: Deactivated successfully. Jan 23 19:05:04.205739 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 19:05:04.219435 systemd-logind[1539]: Session 30 logged out. Waiting for processes to exit. Jan 23 19:05:04.227903 systemd-logind[1539]: Removed session 30. Jan 23 19:05:09.250755 systemd[1]: Started sshd@30-10.0.0.36:22-10.0.0.1:53478.service - OpenSSH per-connection server daemon (10.0.0.1:53478). Jan 23 19:05:09.452920 sshd[4745]: Accepted publickey for core from 10.0.0.1 port 53478 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:09.459760 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:09.479923 systemd-logind[1539]: New session 31 of user core. Jan 23 19:05:09.495574 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 23 19:05:09.925979 sshd[4748]: Connection closed by 10.0.0.1 port 53478 Jan 23 19:05:09.929215 sshd-session[4745]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:09.948883 systemd[1]: sshd@30-10.0.0.36:22-10.0.0.1:53478.service: Deactivated successfully. Jan 23 19:05:09.962952 systemd[1]: session-31.scope: Deactivated successfully. Jan 23 19:05:09.970215 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit. Jan 23 19:05:09.978655 systemd-logind[1539]: Removed session 31. 
Jan 23 19:05:11.477017 containerd[1549]: time="2026-01-23T19:05:11.476564046Z" level=warning msg="container event discarded" container=2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895 type=CONTAINER_CREATED_EVENT Jan 23 19:05:11.951672 containerd[1549]: time="2026-01-23T19:05:11.946925318Z" level=warning msg="container event discarded" container=2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895 type=CONTAINER_STARTED_EVENT Jan 23 19:05:12.451464 containerd[1549]: time="2026-01-23T19:05:12.451382972Z" level=warning msg="container event discarded" container=2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895 type=CONTAINER_STOPPED_EVENT Jan 23 19:05:12.854594 containerd[1549]: time="2026-01-23T19:05:12.854428069Z" level=warning msg="container event discarded" container=bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3 type=CONTAINER_CREATED_EVENT Jan 23 19:05:13.035816 containerd[1549]: time="2026-01-23T19:05:13.034429499Z" level=warning msg="container event discarded" container=bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3 type=CONTAINER_STARTED_EVENT Jan 23 19:05:13.298340 containerd[1549]: time="2026-01-23T19:05:13.297592474Z" level=warning msg="container event discarded" container=bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3 type=CONTAINER_STOPPED_EVENT Jan 23 19:05:13.846510 containerd[1549]: time="2026-01-23T19:05:13.843151143Z" level=warning msg="container event discarded" container=d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c type=CONTAINER_CREATED_EVENT Jan 23 19:05:14.118748 containerd[1549]: time="2026-01-23T19:05:14.118564408Z" level=warning msg="container event discarded" container=d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c type=CONTAINER_STARTED_EVENT Jan 23 19:05:14.272754 containerd[1549]: time="2026-01-23T19:05:14.272406329Z" level=warning msg="container event discarded" container=d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c type=CONTAINER_STOPPED_EVENT Jan 23 19:05:14.839149 containerd[1549]: time="2026-01-23T19:05:14.836067778Z" level=warning msg="container event discarded" container=a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139 type=CONTAINER_CREATED_EVENT Jan 23 19:05:14.990985 systemd[1]: Started sshd@31-10.0.0.36:22-10.0.0.1:60068.service - OpenSSH per-connection server daemon (10.0.0.1:60068). Jan 23 19:05:15.056723 containerd[1549]: time="2026-01-23T19:05:15.056640969Z" level=warning msg="container event discarded" container=a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139 type=CONTAINER_STARTED_EVENT Jan 23 19:05:15.191899 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 60068 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:15.191184 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:15.214621 containerd[1549]: time="2026-01-23T19:05:15.213001614Z" level=warning msg="container event discarded" container=a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139 type=CONTAINER_STOPPED_EVENT Jan 23 19:05:15.224188 systemd-logind[1539]: New session 32 of user core. Jan 23 19:05:15.241761 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 23 19:05:15.954729 containerd[1549]: time="2026-01-23T19:05:15.950405756Z" level=warning msg="container event discarded" container=a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521 type=CONTAINER_CREATED_EVENT Jan 23 19:05:15.980462 sshd[4767]: Connection closed by 10.0.0.1 port 60068 Jan 23 19:05:15.976764 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:15.992313 systemd[1]: sshd@31-10.0.0.36:22-10.0.0.1:60068.service: Deactivated successfully. Jan 23 19:05:16.005937 systemd[1]: session-32.scope: Deactivated successfully. Jan 23 19:05:16.041537 systemd-logind[1539]: Session 32 logged out. Waiting for processes to exit. Jan 23 19:05:16.051994 systemd-logind[1539]: Removed session 32. Jan 23 19:05:16.271939 containerd[1549]: time="2026-01-23T19:05:16.266866010Z" level=warning msg="container event discarded" container=a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521 type=CONTAINER_STARTED_EVENT Jan 23 19:05:21.024699 systemd[1]: Started sshd@32-10.0.0.36:22-10.0.0.1:60078.service - OpenSSH per-connection server daemon (10.0.0.1:60078). Jan 23 19:05:21.210485 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 60078 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:21.217819 sshd-session[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:21.248943 systemd-logind[1539]: New session 33 of user core. Jan 23 19:05:21.264576 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 19:05:21.620186 sshd[4785]: Connection closed by 10.0.0.1 port 60078 Jan 23 19:05:21.621639 sshd-session[4781]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:21.640739 systemd[1]: sshd@32-10.0.0.36:22-10.0.0.1:60078.service: Deactivated successfully. Jan 23 19:05:21.655565 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 19:05:21.661809 systemd-logind[1539]: Session 33 logged out. Waiting for processes to exit. Jan 23 19:05:21.675967 systemd-logind[1539]: Removed session 33. Jan 23 19:05:26.670157 systemd[1]: Started sshd@33-10.0.0.36:22-10.0.0.1:46564.service - OpenSSH per-connection server daemon (10.0.0.1:46564). Jan 23 19:05:27.057710 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 46564 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:27.072788 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:27.101811 systemd-logind[1539]: New session 34 of user core. Jan 23 19:05:27.140413 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 23 19:05:27.467308 sshd[4803]: Connection closed by 10.0.0.1 port 46564 Jan 23 19:05:27.467049 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:27.479765 systemd[1]: sshd@33-10.0.0.36:22-10.0.0.1:46564.service: Deactivated successfully. Jan 23 19:05:27.483366 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 19:05:27.487853 systemd-logind[1539]: Session 34 logged out. Waiting for processes to exit. Jan 23 19:05:27.495605 systemd-logind[1539]: Removed session 34. Jan 23 19:05:30.393457 kubelet[2823]: E0123 19:05:30.387423 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:32.782816 systemd[1]: Started sshd@34-10.0.0.36:22-10.0.0.1:46568.service - OpenSSH per-connection server daemon (10.0.0.1:46568). 
Jan 23 19:05:33.765124 containerd[1549]: time="2026-01-23T19:05:33.760919115Z" level=warning msg="container event discarded" container=42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f type=CONTAINER_CREATED_EVENT Jan 23 19:05:34.437899 containerd[1549]: time="2026-01-23T19:05:33.889733349Z" level=warning msg="container event discarded" container=42b61effbc0e4e3de04768792f22df174c84fec8dc48986e6e61754d1f0a328f type=CONTAINER_STARTED_EVENT Jan 23 19:05:34.451102 containerd[1549]: time="2026-01-23T19:05:34.450473351Z" level=warning msg="container event discarded" container=223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558 type=CONTAINER_CREATED_EVENT Jan 23 19:05:34.451102 containerd[1549]: time="2026-01-23T19:05:34.450676897Z" level=warning msg="container event discarded" container=223f1adbc8741544fd3de9cad39902b33c1e81b51900262c033355c816a7d558 type=CONTAINER_STARTED_EVENT Jan 23 19:05:35.737693 containerd[1549]: time="2026-01-23T19:05:35.057941318Z" level=warning msg="container event discarded" container=efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625 type=CONTAINER_CREATED_EVENT Jan 23 19:05:35.737693 containerd[1549]: time="2026-01-23T19:05:35.058937198Z" level=warning msg="container event discarded" container=79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04 type=CONTAINER_CREATED_EVENT Jan 23 19:05:35.737693 containerd[1549]: time="2026-01-23T19:05:35.058959069Z" level=warning msg="container event discarded" container=efaffbdc84bd5f6edc66c50001437824734392bfb998d3a8173bf48045131625 type=CONTAINER_STARTED_EVENT Jan 23 19:05:35.737693 containerd[1549]: time="2026-01-23T19:05:35.058969498Z" level=warning msg="container event discarded" container=79347f7d7da89cf2f7640859d030bc4b9cd78e0106cb5cba8d705c5ff2c3da04 type=CONTAINER_STARTED_EVENT Jan 23 19:05:40.909556 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:40.987709 kubelet[2823]: E0123 19:05:40.987455 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.532s" Jan 23 19:05:41.012159 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:41.017680 kubelet[2823]: E0123 19:05:41.017646 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:41.020489 kubelet[2823]: E0123 19:05:41.019761 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:41.097884 systemd-logind[1539]: New session 35 of user core. Jan 23 19:05:41.103761 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 23 19:05:43.467769 sshd[4821]: Connection closed by 10.0.0.1 port 46568 Jan 23 19:05:43.474419 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:43.490438 systemd[1]: sshd@34-10.0.0.36:22-10.0.0.1:46568.service: Deactivated successfully. Jan 23 19:05:43.499660 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 19:05:43.543528 systemd-logind[1539]: Session 35 logged out. Waiting for processes to exit. Jan 23 19:05:43.565203 systemd-logind[1539]: Removed session 35. 
Jan 23 19:05:47.379708 kubelet[2823]: E0123 19:05:47.379662 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:48.535216 systemd[1]: Started sshd@35-10.0.0.36:22-10.0.0.1:57192.service - OpenSSH per-connection server daemon (10.0.0.1:57192). Jan 23 19:05:48.982227 sshd[4836]: Accepted publickey for core from 10.0.0.1 port 57192 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:05:48.993802 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:05:49.047703 systemd-logind[1539]: New session 36 of user core. Jan 23 19:05:49.069182 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 19:05:49.586770 sshd[4839]: Connection closed by 10.0.0.1 port 57192 Jan 23 19:05:49.589209 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:49.643490 systemd[1]: sshd@35-10.0.0.36:22-10.0.0.1:57192.service: Deactivated successfully. Jan 23 19:05:49.662090 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 19:05:49.677694 systemd-logind[1539]: Session 36 logged out. Waiting for processes to exit. Jan 23 19:05:49.718491 systemd-logind[1539]: Removed session 36. Jan 23 19:05:53.336779 kubelet[2823]: E0123 19:05:53.335730 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:57.342592 systemd[1]: Started sshd@36-10.0.0.36:22-10.0.0.1:48394.service - OpenSSH per-connection server daemon (10.0.0.1:48394). Jan 23 19:06:02.635228 kubelet[2823]: E0123 19:06:02.630167 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:03.422974 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 48394 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:03.895384 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:05.543534 systemd-logind[1539]: New session 37 of user core. Jan 23 19:06:05.598138 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 19:06:06.287704 kubelet[2823]: E0123 19:06:06.287660 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.895s" Jan 23 19:06:07.078641 kubelet[2823]: E0123 19:06:07.066892 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:07.653186 kubelet[2823]: E0123 19:06:07.649752 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.001s" Jan 23 19:06:07.875720 systemd[1]: cri-containerd-f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930.scope: Deactivated successfully. Jan 23 19:06:07.882841 systemd[1]: cri-containerd-f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930.scope: Consumed 5.131s CPU time, 29.1M memory peak, 964K read from disk, 4K written to disk. 
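
The "Consumed 5.131s CPU time, 29.1M memory peak, 964K read from disk, 4K written to disk" line is systemd reading the unit's cgroup accounting as it tears the scope down. On a cgroup v2 host the same numbers come from cpu.stat (usage_usec) and memory.peak under the unit's cgroup directory; a small reader, assuming a hypothetical scope path and a kernel new enough to expose memory.peak:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        // Hypothetical cgroup path for the scope being torn down above.
        dir := "/sys/fs/cgroup/system.slice/cri-containerd-f680602c714b19.scope"

        if stat, err := os.ReadFile(dir + "/cpu.stat"); err == nil {
            for _, line := range strings.Split(string(stat), "\n") {
                if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
                    usec, _ := strconv.ParseInt(strings.TrimSpace(v), 10, 64)
                    fmt.Printf("consumed %.3fs CPU time\n", float64(usec)/1e6)
                }
            }
        }
        if peak, err := os.ReadFile(dir + "/memory.peak"); err == nil {
            b, _ := strconv.ParseInt(strings.TrimSpace(string(peak)), 10, 64)
            fmt.Printf("memory peak %.1fM\n", float64(b)/(1<<20))
        }
    }

Reading these files only works while the cgroup still exists, which is why systemd samples them at deactivation time rather than after cleanup.
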
Jan 23 19:06:07.899660 containerd[1549]: time="2026-01-23T19:06:07.897973802Z" level=info msg="received container exit event container_id:\"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\" id:\"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\" pid:3226 exit_status:1 exited_at:{seconds:1769195167 nanos:890062195}" Jan 23 19:06:08.167491 sshd[4858]: Connection closed by 10.0.0.1 port 48394 Jan 23 19:06:08.161910 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:08.216731 systemd[1]: sshd@36-10.0.0.36:22-10.0.0.1:48394.service: Deactivated successfully. Jan 23 19:06:08.218649 systemd[1]: sshd@36-10.0.0.36:22-10.0.0.1:48394.service: Consumed 1.614s CPU time, 3.2M memory peak. Jan 23 19:06:08.231748 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 19:06:08.265192 systemd-logind[1539]: Session 37 logged out. Waiting for processes to exit. Jan 23 19:06:08.288190 systemd[1]: Started sshd@37-10.0.0.36:22-10.0.0.1:41602.service - OpenSSH per-connection server daemon (10.0.0.1:41602). Jan 23 19:06:08.318441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930-rootfs.mount: Deactivated successfully. Jan 23 19:06:08.329429 systemd-logind[1539]: Removed session 37. Jan 23 19:06:08.477186 sshd[4884]: Accepted publickey for core from 10.0.0.1 port 41602 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:08.482759 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:08.510980 systemd-logind[1539]: New session 38 of user core. Jan 23 19:06:08.529986 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 19:06:09.746172 kubelet[2823]: I0123 19:06:09.742232 2823 scope.go:117] "RemoveContainer" containerID="f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930" Jan 23 19:06:09.746172 kubelet[2823]: E0123 19:06:09.742990 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:09.931733 containerd[1549]: time="2026-01-23T19:06:09.930886408Z" level=info msg="CreateContainer within sandbox \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 23 19:06:13.636688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144613541.mount: Deactivated successfully. 
Jan 23 19:06:13.818679 containerd[1549]: time="2026-01-23T19:06:13.747942073Z" level=info msg="Container 2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:14.084024 kubelet[2823]: E0123 19:06:14.047147 2823 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.574s" Jan 23 19:06:14.341531 containerd[1549]: time="2026-01-23T19:06:14.289208104Z" level=info msg="CreateContainer within sandbox \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\"" Jan 23 19:06:14.398661 containerd[1549]: time="2026-01-23T19:06:14.397098526Z" level=info msg="StartContainer for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\"" Jan 23 19:06:14.639635 kubelet[2823]: E0123 19:06:14.639592 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:14.672425 containerd[1549]: time="2026-01-23T19:06:14.670562194Z" level=info msg="connecting to shim 2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7" address="unix:///run/containerd/s/a116842bace4fcfb72943387d6e041b1b1b38148ba8366505421aaef6bb45755" protocol=ttrpc version=3 Jan 23 19:06:15.066665 systemd[1]: Started cri-containerd-2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7.scope - libcontainer container 2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7. Jan 23 19:06:16.472595 sshd[4888]: Connection closed by 10.0.0.1 port 41602 Jan 23 19:06:16.473474 containerd[1549]: time="2026-01-23T19:06:16.473372409Z" level=info msg="StartContainer for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" returns successfully" Jan 23 19:06:16.474441 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:16.501497 systemd[1]: sshd@37-10.0.0.36:22-10.0.0.1:41602.service: Deactivated successfully. Jan 23 19:06:16.537380 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 19:06:16.540386 systemd[1]: session-38.scope: Consumed 2.067s CPU time, 55.3M memory peak. Jan 23 19:06:16.547080 systemd-logind[1539]: Session 38 logged out. Waiting for processes to exit. Jan 23 19:06:16.550342 systemd[1]: Started sshd@38-10.0.0.36:22-10.0.0.1:38858.service - OpenSSH per-connection server daemon (10.0.0.1:38858). Jan 23 19:06:16.560402 systemd-logind[1539]: Removed session 38. Jan 23 19:06:17.381673 kubelet[2823]: E0123 19:06:17.380076 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:17.397214 sshd[4933]: Accepted publickey for core from 10.0.0.1 port 38858 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:17.438541 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:17.463787 systemd-logind[1539]: New session 39 of user core. Jan 23 19:06:17.478554 systemd[1]: Started session-39.scope - Session 39 of User core. 
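
This stretch is one turn of a crash loop: the cilium-operator container f680602c... exited with status 1 (the exit event above), the kubelet removed the dead container ("RemoveContainer"), created Attempt:1 inside the same sandbox, and containerd dialed the new task's shim over ttrpc before StartContainer returned. Between such turns the kubelet waits out an exponential crash-loop backoff; the sketch below illustrates that policy shape only, with an assumed base and cap, and is not the kubelet's actual backoff tracker:

    package main

    import (
        "fmt"
        "time"
    )

    // restartDelay doubles the wait per failed attempt and caps it,
    // the general shape of kubelet's CrashLoopBackOff behavior.
    func restartDelay(attempt int, base, max time.Duration) time.Duration {
        d := base << attempt // base * 2^attempt
        if d > max || d <= 0 {
            return max
        }
        return d
    }

    func main() {
        const (
            base = 10 * time.Second // assumed initial backoff
            max  = 5 * time.Minute  // assumed cap
        )
        for attempt := 0; attempt < 6; attempt++ {
            fmt.Printf("attempt=%d exit_status=1 -> next restart in %s\n",
                attempt, restartDelay(attempt, base, max))
        }
    }

Since this is only Attempt:1, the restart here happens quickly; repeated failures would push the container into progressively longer waits.
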
Jan 23 19:06:18.392078 kubelet[2823]: E0123 19:06:18.389162 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:19.921062 sshd[4940]: Connection closed by 10.0.0.1 port 38858 Jan 23 19:06:19.916234 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:19.947453 systemd[1]: sshd@38-10.0.0.36:22-10.0.0.1:38858.service: Deactivated successfully. Jan 23 19:06:19.955514 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 19:06:19.959821 systemd-logind[1539]: Session 39 logged out. Waiting for processes to exit. Jan 23 19:06:19.969793 systemd[1]: Started sshd@39-10.0.0.36:22-10.0.0.1:38862.service - OpenSSH per-connection server daemon (10.0.0.1:38862). Jan 23 19:06:19.976991 systemd-logind[1539]: Removed session 39. Jan 23 19:06:20.297771 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:20.315573 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:20.353495 systemd-logind[1539]: New session 40 of user core. Jan 23 19:06:20.379561 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 23 19:06:21.657350 sshd[4968]: Connection closed by 10.0.0.1 port 38862 Jan 23 19:06:21.664482 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:21.676066 systemd[1]: sshd@39-10.0.0.36:22-10.0.0.1:38862.service: Deactivated successfully. Jan 23 19:06:21.681455 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 19:06:21.685448 systemd-logind[1539]: Session 40 logged out. Waiting for processes to exit. Jan 23 19:06:21.728947 systemd[1]: Started sshd@40-10.0.0.36:22-10.0.0.1:38874.service - OpenSSH per-connection server daemon (10.0.0.1:38874). Jan 23 19:06:21.745028 systemd-logind[1539]: Removed session 40. Jan 23 19:06:21.940809 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 38874 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:21.948450 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:21.989392 systemd-logind[1539]: New session 41 of user core. Jan 23 19:06:22.014639 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 23 19:06:22.462155 sshd[4985]: Connection closed by 10.0.0.1 port 38874 Jan 23 19:06:22.474066 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:22.493216 systemd[1]: sshd@40-10.0.0.36:22-10.0.0.1:38874.service: Deactivated successfully. Jan 23 19:06:22.511898 systemd[1]: session-41.scope: Deactivated successfully. Jan 23 19:06:22.516156 systemd-logind[1539]: Session 41 logged out. Waiting for processes to exit. Jan 23 19:06:22.528847 systemd-logind[1539]: Removed session 41. Jan 23 19:06:27.532520 systemd[1]: Started sshd@41-10.0.0.36:22-10.0.0.1:47020.service - OpenSSH per-connection server daemon (10.0.0.1:47020). Jan 23 19:06:27.836484 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 47020 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:27.835165 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:27.861902 systemd-logind[1539]: New session 42 of user core. Jan 23 19:06:27.888444 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 23 19:06:28.579096 sshd[5002]: Connection closed by 10.0.0.1 port 47020 Jan 23 19:06:28.582071 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:28.639045 systemd[1]: sshd@41-10.0.0.36:22-10.0.0.1:47020.service: Deactivated successfully. Jan 23 19:06:28.647803 systemd[1]: session-42.scope: Deactivated successfully. Jan 23 19:06:28.659177 systemd-logind[1539]: Session 42 logged out. Waiting for processes to exit. Jan 23 19:06:28.665760 systemd-logind[1539]: Removed session 42. Jan 23 19:06:33.132821 update_engine[1541]: I20260123 19:06:33.129397 1541 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 19:06:33.132821 update_engine[1541]: I20260123 19:06:33.129624 1541 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 19:06:33.132821 update_engine[1541]: I20260123 19:06:33.130126 1541 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 19:06:33.132821 update_engine[1541]: I20260123 19:06:33.131022 1541 omaha_request_params.cc:62] Current group set to stable Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143428 1541 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143476 1541 update_attempter.cc:643] Scheduling an action processor start. Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143505 1541 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143566 1541 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143738 1541 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143754 1541 omaha_request_action.cc:272] Request: Jan 23 19:06:33.143942 update_engine[1541]: [Omaha request XML body not captured in this console log] Jan 23 19:06:33.143942 update_engine[1541]: I20260123 19:06:33.143764 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:33.167802 locksmithd[1576]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 19:06:33.181580 update_engine[1541]: I20260123 19:06:33.180764 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:33.183983 update_engine[1541]: I20260123 19:06:33.182781 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:06:33.220147 update_engine[1541]: E20260123 19:06:33.215085 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:33.220147 update_engine[1541]: I20260123 19:06:33.220091 1541 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 19:06:33.650592 systemd[1]: Started sshd@42-10.0.0.36:22-10.0.0.1:47036.service - OpenSSH per-connection server daemon (10.0.0.1:47036).
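These update_engine entries explain why every update check on this host fails the same way: the Omaha endpoint is configured as the literal string "disabled", so curl's DNS lookup fails ("Could not resolve host: disabled") and the fetcher retries on a timer; retries 2 and 3 appear below at roughly ten-second intervals before the attempt is abandoned. A minimal sketch of that fixed-interval retry pattern, with the counts and spacing taken from this log (fetch() and the URL handling are stand-ins, not update_engine's API):

    # Sketch: fixed-interval retry loop, mirroring the libcurl_http_fetcher
    # behavior seen above. fetch() is a stand-in; 3 retries and ~10 s spacing
    # are taken from the retry lines in this log.
    import time
    import urllib.request

    def fetch(url):
        with urllib.request.urlopen(url, timeout=1) as resp:
            return resp.read()

    def fetch_with_retries(url, retries=3, delay=10.0):
        for attempt in range(1, retries + 1):
            try:
                return fetch(url)
            except OSError as exc:  # DNS failure for host "disabled" lands here
                print(f"No HTTP response, retry {attempt}: {exc}")
                time.sleep(delay)
        raise RuntimeError("transfer failed after retries")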
Jan 23 19:06:33.937930 sshd[5015]: Accepted publickey for core from 10.0.0.1 port 47036 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:33.943359 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:33.970921 systemd-logind[1539]: New session 43 of user core. Jan 23 19:06:33.979616 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 23 19:06:34.777635 sshd[5018]: Connection closed by 10.0.0.1 port 47036 Jan 23 19:06:34.776596 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:34.796852 systemd[1]: sshd@42-10.0.0.36:22-10.0.0.1:47036.service: Deactivated successfully. Jan 23 19:06:34.813797 systemd[1]: session-43.scope: Deactivated successfully. Jan 23 19:06:34.826806 systemd-logind[1539]: Session 43 logged out. Waiting for processes to exit. Jan 23 19:06:34.838925 systemd-logind[1539]: Removed session 43. Jan 23 19:06:39.848865 systemd[1]: Started sshd@43-10.0.0.36:22-10.0.0.1:40142.service - OpenSSH per-connection server daemon (10.0.0.1:40142). Jan 23 19:06:40.126563 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 40142 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:40.119372 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:40.184585 systemd-logind[1539]: New session 44 of user core. Jan 23 19:06:40.192459 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 23 19:06:40.869937 sshd[5034]: Connection closed by 10.0.0.1 port 40142 Jan 23 19:06:40.871759 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:40.919978 systemd[1]: sshd@43-10.0.0.36:22-10.0.0.1:40142.service: Deactivated successfully. Jan 23 19:06:40.959484 systemd[1]: session-44.scope: Deactivated successfully. Jan 23 19:06:40.972774 systemd-logind[1539]: Session 44 logged out. Waiting for processes to exit. Jan 23 19:06:40.983068 systemd-logind[1539]: Removed session 44. Jan 23 19:06:43.142198 update_engine[1541]: I20260123 19:06:43.139714 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:43.142198 update_engine[1541]: I20260123 19:06:43.139837 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:43.142198 update_engine[1541]: I20260123 19:06:43.140550 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:06:43.171848 update_engine[1541]: E20260123 19:06:43.170725 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:43.171848 update_engine[1541]: I20260123 19:06:43.170929 1541 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 19:06:45.942847 systemd[1]: Started sshd@44-10.0.0.36:22-10.0.0.1:44794.service - OpenSSH per-connection server daemon (10.0.0.1:44794). Jan 23 19:06:46.159452 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 44794 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:46.168681 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:46.234854 systemd-logind[1539]: New session 45 of user core. Jan 23 19:06:46.264690 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 23 19:06:46.380443 kubelet[2823]: E0123 19:06:46.380000 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:46.796109 sshd[5051]: Connection closed by 10.0.0.1 port 44794 Jan 23 19:06:46.798585 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:46.834892 systemd[1]: sshd@44-10.0.0.36:22-10.0.0.1:44794.service: Deactivated successfully. Jan 23 19:06:46.840472 systemd[1]: session-45.scope: Deactivated successfully. Jan 23 19:06:46.846833 systemd-logind[1539]: Session 45 logged out. Waiting for processes to exit. Jan 23 19:06:46.853523 systemd-logind[1539]: Removed session 45. Jan 23 19:06:48.381376 kubelet[2823]: E0123 19:06:48.380744 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:51.834018 systemd[1]: Started sshd@45-10.0.0.36:22-10.0.0.1:44796.service - OpenSSH per-connection server daemon (10.0.0.1:44796). Jan 23 19:06:51.976406 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 44796 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:51.986156 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:52.015411 systemd-logind[1539]: New session 46 of user core. Jan 23 19:06:52.027948 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 23 19:06:52.331932 sshd[5070]: Connection closed by 10.0.0.1 port 44796 Jan 23 19:06:52.333386 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:52.343785 systemd[1]: sshd@45-10.0.0.36:22-10.0.0.1:44796.service: Deactivated successfully. Jan 23 19:06:52.348160 systemd[1]: session-46.scope: Deactivated successfully. Jan 23 19:06:52.357094 systemd-logind[1539]: Session 46 logged out. Waiting for processes to exit. Jan 23 19:06:52.365086 systemd-logind[1539]: Removed session 46. Jan 23 19:06:53.131208 update_engine[1541]: I20260123 19:06:53.130973 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:53.131208 update_engine[1541]: I20260123 19:06:53.131163 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:53.131208 update_engine[1541]: I20260123 19:06:53.131963 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:06:53.157216 update_engine[1541]: E20260123 19:06:53.157003 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:53.157216 update_engine[1541]: I20260123 19:06:53.157138 1541 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 19:06:53.379903 kubelet[2823]: E0123 19:06:53.379458 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:57.740467 systemd[1]: Started sshd@46-10.0.0.36:22-10.0.0.1:43764.service - OpenSSH per-connection server daemon (10.0.0.1:43764). Jan 23 19:06:58.133478 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:06:58.135872 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:58.161664 systemd-logind[1539]: New session 47 of user core. 
Jan 23 19:06:58.180970 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 23 19:06:58.581399 sshd[5088]: Connection closed by 10.0.0.1 port 43764 Jan 23 19:06:58.582517 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:58.604401 systemd[1]: sshd@46-10.0.0.36:22-10.0.0.1:43764.service: Deactivated successfully. Jan 23 19:06:58.611806 systemd[1]: session-47.scope: Deactivated successfully. Jan 23 19:06:58.624357 systemd-logind[1539]: Session 47 logged out. Waiting for processes to exit. Jan 23 19:06:58.633931 systemd-logind[1539]: Removed session 47. Jan 23 19:07:00.424540 kubelet[2823]: E0123 19:07:00.423961 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:03.140462 update_engine[1541]: I20260123 19:07:03.133013 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:07:03.188754 update_engine[1541]: I20260123 19:07:03.162781 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:07:03.253935 update_engine[1541]: I20260123 19:07:03.202203 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:07:03.268479 update_engine[1541]: E20260123 19:07:03.264659 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.264869 1541 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.264889 1541 omaha_request_action.cc:617] Omaha request response: Jan 23 19:07:03.268479 update_engine[1541]: E20260123 19:07:03.265641 1541 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.265884 1541 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.265898 1541 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.265908 1541 update_attempter.cc:306] Processing Done. Jan 23 19:07:03.268479 update_engine[1541]: E20260123 19:07:03.265934 1541 update_attempter.cc:619] Update failed. Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.265946 1541 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.265956 1541 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.265965 1541 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.266055 1541 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.266100 1541 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:07:03.268479 update_engine[1541]: I20260123 19:07:03.266115 1541 omaha_request_action.cc:272] Request: Jan 23 19:07:03.268479 update_engine[1541]: [Omaha request XML body not captured in this console log] Jan 23 19:07:03.269635 update_engine[1541]: I20260123 19:07:03.266125 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:07:03.269635 update_engine[1541]: I20260123 19:07:03.266356 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:07:03.269635 update_engine[1541]: I20260123 19:07:03.267166 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:07:03.282832 locksmithd[1576]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 19:07:03.291075 update_engine[1541]: E20260123 19:07:03.287497 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:07:03.294668 update_engine[1541]: I20260123 19:07:03.294546 1541 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:07:03.295190 update_engine[1541]: I20260123 19:07:03.294840 1541 omaha_request_action.cc:617] Omaha request response: Jan 23 19:07:03.295190 update_engine[1541]: I20260123 19:07:03.294871 1541 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:07:03.295190 update_engine[1541]: I20260123 19:07:03.294883 1541 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:07:03.295190 update_engine[1541]: I20260123 19:07:03.294893 1541 update_attempter.cc:306] Processing Done. Jan 23 19:07:03.295190 update_engine[1541]: I20260123 19:07:03.294903 1541 update_attempter.cc:310] Error event sent. Jan 23 19:07:03.295190 update_engine[1541]: I20260123 19:07:03.295028 1541 update_check_scheduler.cc:74] Next update check in 45m29s Jan 23 19:07:03.429735 locksmithd[1576]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 19:07:03.645150 systemd[1]: Started sshd@47-10.0.0.36:22-10.0.0.1:43772.service - OpenSSH per-connection server daemon (10.0.0.1:43772). Jan 23 19:07:03.861169 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 43772 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:07:03.872412 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:03.920916 systemd-logind[1539]: New session 48 of user core. Jan 23 19:07:03.945070 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 23 19:07:04.448726 sshd[5105]: Connection closed by 10.0.0.1 port 43772 Jan 23 19:07:04.449748 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:04.458901 systemd[1]: sshd@47-10.0.0.36:22-10.0.0.1:43772.service: Deactivated successfully. Jan 23 19:07:04.469734 systemd[1]: session-48.scope: Deactivated successfully. Jan 23 19:07:04.481754 systemd-logind[1539]: Session 48 logged out.
Waiting for processes to exit. Jan 23 19:07:04.492080 systemd-logind[1539]: Removed session 48. Jan 23 19:07:09.385813 kubelet[2823]: E0123 19:07:09.380917 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:09.506233 systemd[1]: Started sshd@48-10.0.0.36:22-10.0.0.1:54518.service - OpenSSH per-connection server daemon (10.0.0.1:54518). Jan 23 19:07:09.709791 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 54518 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:07:09.716231 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:09.771503 systemd-logind[1539]: New session 49 of user core. Jan 23 19:07:09.803831 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 23 19:07:10.201977 sshd[5122]: Connection closed by 10.0.0.1 port 54518 Jan 23 19:07:10.200083 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:10.218868 systemd[1]: sshd@48-10.0.0.36:22-10.0.0.1:54518.service: Deactivated successfully. Jan 23 19:07:10.222674 systemd[1]: session-49.scope: Deactivated successfully. Jan 23 19:07:10.227860 systemd-logind[1539]: Session 49 logged out. Waiting for processes to exit. Jan 23 19:07:10.235921 systemd[1]: Started sshd@49-10.0.0.36:22-10.0.0.1:54524.service - OpenSSH per-connection server daemon (10.0.0.1:54524). Jan 23 19:07:10.242805 systemd-logind[1539]: Removed session 49. Jan 23 19:07:10.389828 kubelet[2823]: E0123 19:07:10.381696 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:10.401513 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 54524 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:07:10.405965 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:10.447150 systemd-logind[1539]: New session 50 of user core. Jan 23 19:07:10.475514 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 23 19:07:13.466908 containerd[1549]: time="2026-01-23T19:07:13.466624163Z" level=info msg="StopContainer for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" with timeout 30 (s)" Jan 23 19:07:13.472025 containerd[1549]: time="2026-01-23T19:07:13.471521560Z" level=info msg="Stop container \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" with signal terminated" Jan 23 19:07:13.602050 systemd[1]: cri-containerd-2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7.scope: Deactivated successfully. Jan 23 19:07:13.603407 systemd[1]: cri-containerd-2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7.scope: Consumed 1.201s CPU time, 26M memory peak, 980K read from disk, 4K written to disk. 
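The StopContainer pair above ("with timeout 30 (s)", then "with signal terminated") is the usual CRI shutdown contract: the runtime delivers SIGTERM, waits out the grace period (30 s for the operator container here, 2 s for the cilium agent just below), and escalates to SIGKILL only if the process is still alive. A minimal sketch of the same pattern for a local process, with subprocess.Popen standing in for the container task:

    # Sketch: SIGTERM with a grace period, then SIGKILL -- the CRI
    # StopContainer contract in miniature, applied to a local process.
    import subprocess

    def stop_gracefully(proc: subprocess.Popen, timeout: float = 30.0) -> int:
        proc.terminate()                 # SIGTERM: ask nicely
        try:
            return proc.wait(timeout)    # grace period
        except subprocess.TimeoutExpired:
            proc.kill()                  # SIGKILL after the deadline
            return proc.wait()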
Jan 23 19:07:13.603927 containerd[1549]: time="2026-01-23T19:07:13.603880144Z" level=info msg="received container exit event container_id:\"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" id:\"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" pid:4912 exited_at:{seconds:1769195233 nanos:601732549}" Jan 23 19:07:13.661899 containerd[1549]: time="2026-01-23T19:07:13.660980165Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:07:13.680016 containerd[1549]: time="2026-01-23T19:07:13.679894178Z" level=info msg="StopContainer for \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" with timeout 2 (s)" Jan 23 19:07:13.681378 containerd[1549]: time="2026-01-23T19:07:13.681076402Z" level=info msg="Stop container \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" with signal terminated" Jan 23 19:07:13.746830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7-rootfs.mount: Deactivated successfully. Jan 23 19:07:13.770924 systemd-networkd[1468]: lxc_health: Link DOWN Jan 23 19:07:13.772545 systemd-networkd[1468]: lxc_health: Lost carrier Jan 23 19:07:13.829973 containerd[1549]: time="2026-01-23T19:07:13.829684996Z" level=info msg="StopContainer for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" returns successfully" Jan 23 19:07:13.832077 containerd[1549]: time="2026-01-23T19:07:13.832035562Z" level=info msg="StopPodSandbox for \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\"" Jan 23 19:07:13.844466 containerd[1549]: time="2026-01-23T19:07:13.842484490Z" level=info msg="Container to stop \"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:13.844466 containerd[1549]: time="2026-01-23T19:07:13.842535914Z" level=info msg="Container to stop \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:13.870232 systemd[1]: cri-containerd-a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521.scope: Deactivated successfully. Jan 23 19:07:13.873977 systemd[1]: cri-containerd-a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521.scope: Consumed 27.087s CPU time, 134.5M memory peak, 184K read from disk, 13.3M written to disk. Jan 23 19:07:13.886402 containerd[1549]: time="2026-01-23T19:07:13.885139462Z" level=info msg="received container exit event container_id:\"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" id:\"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" pid:3465 exited_at:{seconds:1769195233 nanos:877224145}" Jan 23 19:07:13.935684 systemd[1]: cri-containerd-d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0.scope: Deactivated successfully. 
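The sandbox exit events recorded just below carry exit_status:137, which follows the shell convention of 128 plus the signal number: 137 - 128 = 9, so the sandbox process was ended by SIGKILL rather than exiting on its own. A small decoder for that convention:

    # Sketch: decode the shell-style "128 + N" exit status convention.
    import signal

    def describe_exit(status: int) -> str:
        if status > 128:
            return f"killed by signal {signal.Signals(status - 128).name}"
        return f"exited with code {status}"

    print(describe_exit(137))  # -> killed by signal SIGKILL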
Jan 23 19:07:13.954973 containerd[1549]: time="2026-01-23T19:07:13.954904737Z" level=info msg="received sandbox exit event container_id:\"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" id:\"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" exit_status:137 exited_at:{seconds:1769195233 nanos:952089380}" monitor_name=podsandbox Jan 23 19:07:14.222779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0-rootfs.mount: Deactivated successfully. Jan 23 19:07:14.274913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521-rootfs.mount: Deactivated successfully. Jan 23 19:07:14.312084 containerd[1549]: time="2026-01-23T19:07:14.289942866Z" level=info msg="shim disconnected" id=d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0 namespace=k8s.io Jan 23 19:07:14.312084 containerd[1549]: time="2026-01-23T19:07:14.289987339Z" level=warning msg="cleaning up after shim disconnected" id=d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0 namespace=k8s.io Jan 23 19:07:14.312084 containerd[1549]: time="2026-01-23T19:07:14.289998700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:07:14.369224 kubelet[2823]: I0123 19:07:14.362507 2823 scope.go:117] "RemoveContainer" containerID="f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930" Jan 23 19:07:14.370145 containerd[1549]: time="2026-01-23T19:07:14.369072828Z" level=info msg="StopContainer for \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" returns successfully" Jan 23 19:07:14.375503 containerd[1549]: time="2026-01-23T19:07:14.374744592Z" level=info msg="StopPodSandbox for \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\"" Jan 23 19:07:14.375503 containerd[1549]: time="2026-01-23T19:07:14.374852382Z" level=info msg="Container to stop \"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:14.375503 containerd[1549]: time="2026-01-23T19:07:14.374871017Z" level=info msg="Container to stop \"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:14.375503 containerd[1549]: time="2026-01-23T19:07:14.374883520Z" level=info msg="Container to stop \"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:14.375503 containerd[1549]: time="2026-01-23T19:07:14.374894861Z" level=info msg="Container to stop \"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:14.375503 containerd[1549]: time="2026-01-23T19:07:14.374948069Z" level=info msg="Container to stop \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:07:14.381491 containerd[1549]: time="2026-01-23T19:07:14.380222628Z" level=info msg="RemoveContainer for \"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\"" Jan 23 19:07:14.456147 containerd[1549]: time="2026-01-23T19:07:14.456091965Z" level=info msg="RemoveContainer for \"f680602c714b1971fce2260e1898ae3ae048b6114fe050921a691ba9f7a95930\" returns successfully" Jan 23 
19:07:14.457103 systemd[1]: cri-containerd-2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb.scope: Deactivated successfully. Jan 23 19:07:14.470170 containerd[1549]: time="2026-01-23T19:07:14.469772311Z" level=info msg="received sandbox exit event container_id:\"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" id:\"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" exit_status:137 exited_at:{seconds:1769195234 nanos:463695505}" monitor_name=podsandbox Jan 23 19:07:14.542756 containerd[1549]: time="2026-01-23T19:07:14.541424964Z" level=info msg="received sandbox container exit event sandbox_id:\"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" exit_status:137 exited_at:{seconds:1769195233 nanos:952089380}" monitor_name=criService Jan 23 19:07:14.543006 containerd[1549]: time="2026-01-23T19:07:14.542953220Z" level=info msg="TearDown network for sandbox \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" successfully" Jan 23 19:07:14.543006 containerd[1549]: time="2026-01-23T19:07:14.542985359Z" level=info msg="StopPodSandbox for \"d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0\" returns successfully" Jan 23 19:07:14.543789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d32852f2e9e802a6a1a48f75df9a358e2a9ba34e1263139b6423eddfb6a3b6c0-shm.mount: Deactivated successfully. Jan 23 19:07:14.622233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb-rootfs.mount: Deactivated successfully. Jan 23 19:07:14.659089 containerd[1549]: time="2026-01-23T19:07:14.658172510Z" level=info msg="shim disconnected" id=2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb namespace=k8s.io Jan 23 19:07:14.659089 containerd[1549]: time="2026-01-23T19:07:14.658224255Z" level=warning msg="cleaning up after shim disconnected" id=2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb namespace=k8s.io Jan 23 19:07:14.659089 containerd[1549]: time="2026-01-23T19:07:14.658390134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:07:14.669930 kubelet[2823]: I0123 19:07:14.660815 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b78d5c71-6568-4895-bbae-9248ea64de26-cilium-config-path\") pod \"b78d5c71-6568-4895-bbae-9248ea64de26\" (UID: \"b78d5c71-6568-4895-bbae-9248ea64de26\") " Jan 23 19:07:14.669930 kubelet[2823]: I0123 19:07:14.660863 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9l2b\" (UniqueName: \"kubernetes.io/projected/b78d5c71-6568-4895-bbae-9248ea64de26-kube-api-access-g9l2b\") pod \"b78d5c71-6568-4895-bbae-9248ea64de26\" (UID: \"b78d5c71-6568-4895-bbae-9248ea64de26\") " Jan 23 19:07:14.670929 kubelet[2823]: I0123 19:07:14.670860 2823 scope.go:117] "RemoveContainer" containerID="2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7" Jan 23 19:07:14.681980 containerd[1549]: time="2026-01-23T19:07:14.681748188Z" level=info msg="RemoveContainer for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\"" Jan 23 19:07:14.696123 kubelet[2823]: I0123 19:07:14.695469 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78d5c71-6568-4895-bbae-9248ea64de26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b78d5c71-6568-4895-bbae-9248ea64de26" (UID: 
"b78d5c71-6568-4895-bbae-9248ea64de26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:07:14.701433 kubelet[2823]: I0123 19:07:14.701231 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78d5c71-6568-4895-bbae-9248ea64de26-kube-api-access-g9l2b" (OuterVolumeSpecName: "kube-api-access-g9l2b") pod "b78d5c71-6568-4895-bbae-9248ea64de26" (UID: "b78d5c71-6568-4895-bbae-9248ea64de26"). InnerVolumeSpecName "kube-api-access-g9l2b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:07:14.720898 containerd[1549]: time="2026-01-23T19:07:14.720771272Z" level=info msg="RemoveContainer for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" returns successfully" Jan 23 19:07:14.729735 kubelet[2823]: I0123 19:07:14.729068 2823 scope.go:117] "RemoveContainer" containerID="2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7" Jan 23 19:07:14.734090 containerd[1549]: time="2026-01-23T19:07:14.734003646Z" level=error msg="ContainerStatus for \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\": not found" Jan 23 19:07:14.737127 systemd[1]: var-lib-kubelet-pods-b78d5c71\x2d6568\x2d4895\x2dbbae\x2d9248ea64de26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg9l2b.mount: Deactivated successfully. Jan 23 19:07:14.742904 kubelet[2823]: E0123 19:07:14.742740 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\": not found" containerID="2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7" Jan 23 19:07:14.743001 kubelet[2823]: I0123 19:07:14.742873 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7"} err="failed to get container status \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a81f20fcb18a9126fecd15bdcf7471e44e96a794b405b3d3483db3d82713ac7\": not found" Jan 23 19:07:14.763770 kubelet[2823]: I0123 19:07:14.763098 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b78d5c71-6568-4895-bbae-9248ea64de26-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:14.763770 kubelet[2823]: I0123 19:07:14.763214 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9l2b\" (UniqueName: \"kubernetes.io/projected/b78d5c71-6568-4895-bbae-9248ea64de26-kube-api-access-g9l2b\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:14.771031 containerd[1549]: time="2026-01-23T19:07:14.770953753Z" level=info msg="received sandbox container exit event sandbox_id:\"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" exit_status:137 exited_at:{seconds:1769195234 nanos:463695505}" monitor_name=criService Jan 23 19:07:14.772351 containerd[1549]: time="2026-01-23T19:07:14.771701247Z" level=info msg="TearDown network for sandbox \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" successfully" Jan 23 19:07:14.772351 containerd[1549]: 
time="2026-01-23T19:07:14.771736081Z" level=info msg="StopPodSandbox for \"2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb\" returns successfully" Jan 23 19:07:14.779754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2054ac167b2b6ca30d533aa41762c4d19b856995c9511ece9541ec19213e33fb-shm.mount: Deactivated successfully. Jan 23 19:07:14.976465 kubelet[2823]: I0123 19:07:14.973747 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj6q4\" (UniqueName: \"kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-kube-api-access-zj6q4\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.976465 kubelet[2823]: I0123 19:07:14.973798 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hostproc\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.976465 kubelet[2823]: I0123 19:07:14.973829 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-cgroup\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.976465 kubelet[2823]: I0123 19:07:14.973856 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cni-path\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.976465 kubelet[2823]: I0123 19:07:14.973899 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91939f94-c884-4c8c-a9cd-81e863fc3bd2-clustermesh-secrets\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.976465 kubelet[2823]: I0123 19:07:14.974062 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.979443 kubelet[2823]: I0123 19:07:14.976217 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-bpf-maps\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979443 kubelet[2823]: I0123 19:07:14.978535 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hubble-tls\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979443 kubelet[2823]: I0123 19:07:14.978652 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-run\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979443 kubelet[2823]: I0123 19:07:14.978678 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-xtables-lock\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979443 kubelet[2823]: I0123 19:07:14.978706 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-kernel\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979443 kubelet[2823]: I0123 19:07:14.978730 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-config-path\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979776 kubelet[2823]: I0123 19:07:14.978748 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-lib-modules\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979776 kubelet[2823]: I0123 19:07:14.978771 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-etc-cni-netd\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979776 kubelet[2823]: I0123 19:07:14.978788 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-net\") pod \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\" (UID: \"91939f94-c884-4c8c-a9cd-81e863fc3bd2\") " Jan 23 19:07:14.979776 kubelet[2823]: I0123 19:07:14.978835 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:14.979776 kubelet[2823]: I0123 19:07:14.976418 2823 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cni-path" (OuterVolumeSpecName: "cni-path") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.979776 kubelet[2823]: I0123 19:07:14.976440 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hostproc" (OuterVolumeSpecName: "hostproc") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.980000 kubelet[2823]: I0123 19:07:14.978873 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.980000 kubelet[2823]: I0123 19:07:14.978891 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.980000 kubelet[2823]: I0123 19:07:14.979682 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.980000 kubelet[2823]: I0123 19:07:14.979720 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.980000 kubelet[2823]: I0123 19:07:14.979745 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.980182 kubelet[2823]: I0123 19:07:14.979768 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:14.988789 kubelet[2823]: I0123 19:07:14.986039 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:07:15.021444 kubelet[2823]: I0123 19:07:15.019162 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91939f94-c884-4c8c-a9cd-81e863fc3bd2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:07:15.019894 systemd[1]: var-lib-kubelet-pods-91939f94\x2dc884\x2d4c8c\x2da9cd\x2d81e863fc3bd2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 19:07:15.056492 kubelet[2823]: I0123 19:07:15.056429 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:07:15.063644 kubelet[2823]: I0123 19:07:15.063217 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:07:15.068808 systemd[1]: var-lib-kubelet-pods-91939f94\x2dc884\x2d4c8c\x2da9cd\x2d81e863fc3bd2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj6q4.mount: Deactivated successfully. Jan 23 19:07:15.078399 kubelet[2823]: I0123 19:07:15.077472 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-kube-api-access-zj6q4" (OuterVolumeSpecName: "kube-api-access-zj6q4") pod "91939f94-c884-4c8c-a9cd-81e863fc3bd2" (UID: "91939f94-c884-4c8c-a9cd-81e863fc3bd2"). InnerVolumeSpecName "kube-api-access-zj6q4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:07:15.080504 kubelet[2823]: I0123 19:07:15.079084 2823 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.080504 kubelet[2823]: I0123 19:07:15.079194 2823 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.080504 kubelet[2823]: I0123 19:07:15.079210 2823 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.080504 kubelet[2823]: I0123 19:07:15.079226 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zj6q4\" (UniqueName: \"kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-kube-api-access-zj6q4\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.080504 kubelet[2823]: I0123 19:07:15.079536 2823 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.079144 systemd[1]: var-lib-kubelet-pods-91939f94\x2dc884\x2d4c8c\x2da9cd\x2d81e863fc3bd2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 19:07:15.082058 kubelet[2823]: I0123 19:07:15.081148 2823 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.082058 kubelet[2823]: I0123 19:07:15.081168 2823 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91939f94-c884-4c8c-a9cd-81e863fc3bd2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.082058 kubelet[2823]: I0123 19:07:15.081185 2823 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.082058 kubelet[2823]: I0123 19:07:15.081198 2823 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91939f94-c884-4c8c-a9cd-81e863fc3bd2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.082058 kubelet[2823]: I0123 19:07:15.081209 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.082058 kubelet[2823]: I0123 19:07:15.081220 2823 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.089916 kubelet[2823]: I0123 19:07:15.081231 2823 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91939f94-c884-4c8c-a9cd-81e863fc3bd2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.089916 kubelet[2823]: I0123 19:07:15.089184 2823 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91939f94-c884-4c8c-a9cd-81e863fc3bd2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:07:15.087816 systemd[1]: Removed slice kubepods-besteffort-podb78d5c71_6568_4895_bbae_9248ea64de26.slice - libcontainer container kubepods-besteffort-podb78d5c71_6568_4895_bbae_9248ea64de26.slice. Jan 23 19:07:15.087945 systemd[1]: kubepods-besteffort-podb78d5c71_6568_4895_bbae_9248ea64de26.slice: Consumed 6.408s CPU time, 29.3M memory peak, 1.8M read from disk, 8K written to disk. Jan 23 19:07:15.157215 sshd[5139]: Connection closed by 10.0.0.1 port 54524 Jan 23 19:07:15.154694 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:15.184120 systemd[1]: sshd@49-10.0.0.36:22-10.0.0.1:54524.service: Deactivated successfully. Jan 23 19:07:15.191954 systemd[1]: session-50.scope: Deactivated successfully. Jan 23 19:07:15.192904 systemd[1]: session-50.scope: Consumed 1.250s CPU time, 27.1M memory peak. Jan 23 19:07:15.196496 systemd-logind[1539]: Session 50 logged out. Waiting for processes to exit. Jan 23 19:07:15.212423 systemd[1]: Started sshd@50-10.0.0.36:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). Jan 23 19:07:15.217919 systemd-logind[1539]: Removed session 50. Jan 23 19:07:15.415990 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:07:15.422216 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:15.448099 systemd-logind[1539]: New session 51 of user core. Jan 23 19:07:15.464113 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 23 19:07:15.812177 kubelet[2823]: I0123 19:07:15.811488 2823 scope.go:117] "RemoveContainer" containerID="a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521" Jan 23 19:07:15.847081 containerd[1549]: time="2026-01-23T19:07:15.844138398Z" level=info msg="RemoveContainer for \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\"" Jan 23 19:07:15.872542 systemd[1]: Removed slice kubepods-burstable-pod91939f94_c884_4c8c_a9cd_81e863fc3bd2.slice - libcontainer container kubepods-burstable-pod91939f94_c884_4c8c_a9cd_81e863fc3bd2.slice. Jan 23 19:07:15.873671 systemd[1]: kubepods-burstable-pod91939f94_c884_4c8c_a9cd_81e863fc3bd2.slice: Consumed 27.428s CPU time, 134.8M memory peak, 200K read from disk, 13.3M written to disk. 
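The .mount unit names in the entries above (var-lib-kubelet-pods-b78d5c71\x2d6568..., ...kubernetes.io\x7eprojected...) use systemd's unit-name escaping: path separators become "-", and bytes outside [a-zA-Z0-9_.:] such as "-" and "~" are hex-escaped as \x2d and \x7e, so each unit name maps back to exactly one kubelet volume path. A minimal sketch of the forward mapping; the authoritative tool is systemd-escape --path, and this skips its special case for a leading dot:

    # Sketch: systemd path escaping as seen in the .mount unit names above.
    # "/" separators become "-"; other bytes outside [a-zA-Z0-9_.:] become \xNN.
    def systemd_escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    print(systemd_escape_path("/var/lib/kubelet/pods/b78d5c71-6568"))
    # -> var-lib-kubelet-pods-b78d5c71\x2d6568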
Jan 23 19:07:15.884915 containerd[1549]: time="2026-01-23T19:07:15.884655707Z" level=info msg="RemoveContainer for \"a882d584548272d9252203610c8e2dc4f1f99732eab5583829b16f88f0eb1521\" returns successfully" Jan 23 19:07:15.889081 kubelet[2823]: I0123 19:07:15.884913 2823 scope.go:117] "RemoveContainer" containerID="a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139" Jan 23 19:07:15.918127 containerd[1549]: time="2026-01-23T19:07:15.916478607Z" level=info msg="RemoveContainer for \"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\"" Jan 23 19:07:15.969791 containerd[1549]: time="2026-01-23T19:07:15.965653544Z" level=info msg="RemoveContainer for \"a6ed51cc78328ba322e962c4cc3d3408dba4e95bd5292d0984ee9eb3a7ddc139\" returns successfully" Jan 23 19:07:15.969950 kubelet[2823]: I0123 19:07:15.965944 2823 scope.go:117] "RemoveContainer" containerID="d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c" Jan 23 19:07:15.983750 containerd[1549]: time="2026-01-23T19:07:15.982680135Z" level=info msg="RemoveContainer for \"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\"" Jan 23 19:07:16.026085 containerd[1549]: time="2026-01-23T19:07:16.025925724Z" level=info msg="RemoveContainer for \"d4cdc206069bb61b794948ba1a85f074bf7177fa37f5221fcf5d282bc935332c\" returns successfully" Jan 23 19:07:16.038501 kubelet[2823]: I0123 19:07:16.038133 2823 scope.go:117] "RemoveContainer" containerID="bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3" Jan 23 19:07:16.069934 containerd[1549]: time="2026-01-23T19:07:16.069050853Z" level=info msg="RemoveContainer for \"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\"" Jan 23 19:07:16.146816 containerd[1549]: time="2026-01-23T19:07:16.146695588Z" level=info msg="RemoveContainer for \"bbc18a9cf91a92365d766c0b14bd9a7526902ce587f2418df12bebef23a6e0d3\" returns successfully" Jan 23 19:07:16.147431 kubelet[2823]: I0123 19:07:16.146955 2823 scope.go:117] "RemoveContainer" containerID="2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895" Jan 23 19:07:16.154190 containerd[1549]: time="2026-01-23T19:07:16.154157577Z" level=info msg="RemoveContainer for \"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\"" Jan 23 19:07:16.171626 containerd[1549]: time="2026-01-23T19:07:16.171030031Z" level=info msg="RemoveContainer for \"2d485421a76f69d5639bed8dcc3fd9397b858df02cccebf06294ce1c5a6da895\" returns successfully" Jan 23 19:07:16.393828 kubelet[2823]: I0123 19:07:16.391762 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91939f94-c884-4c8c-a9cd-81e863fc3bd2" path="/var/lib/kubelet/pods/91939f94-c884-4c8c-a9cd-81e863fc3bd2/volumes" Jan 23 19:07:16.399760 kubelet[2823]: I0123 19:07:16.397208 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78d5c71-6568-4895-bbae-9248ea64de26" path="/var/lib/kubelet/pods/b78d5c71-6568-4895-bbae-9248ea64de26/volumes" Jan 23 19:07:18.150986 sshd[5291]: Connection closed by 10.0.0.1 port 60344 Jan 23 19:07:18.153671 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:18.178070 kubelet[2823]: E0123 19:07:18.177940 2823 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:07:18.182022 systemd[1]: sshd@50-10.0.0.36:22-10.0.0.1:60344.service: Deactivated successfully. 
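The kubelet.go:3117 "Container runtime network not ready ... cni plugin not initialized" error above is the delayed consequence of the teardown: removing /etc/cni/net.d/05-cilium.conf (the reload failure logged at 19:07:13) left containerd with no CNI network config, and node networking stays not-ready until the replacement cilium pod installs a new one. A minimal sketch of the presence check behind that message; real containerd also parses and validates the files rather than just scanning the directory:

    # Sketch: the "is there any CNI config?" check behind NetworkPluginNotReady.
    # Only scans /etc/cni/net.d; the real loader also validates the contents.
    from pathlib import Path

    def cni_config_present(confdir="/etc/cni/net.d") -> bool:
        p = Path(confdir)
        return p.is_dir() and any(
            f.suffix in (".conf", ".conflist", ".json") for f in p.iterdir()
        )

    if not cni_config_present():
        print("cni plugin not initialized: no network config found")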
Jan 23 19:07:18.187133 systemd[1]: session-51.scope: Deactivated successfully.
Jan 23 19:07:18.194505 systemd-logind[1539]: Session 51 logged out. Waiting for processes to exit.
Jan 23 19:07:18.202187 systemd[1]: Started sshd@51-10.0.0.36:22-10.0.0.1:60358.service - OpenSSH per-connection server daemon (10.0.0.1:60358).
Jan 23 19:07:18.231151 systemd-logind[1539]: Removed session 51.
Jan 23 19:07:18.452724 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 60358 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY
Jan 23 19:07:18.458953 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:18.482395 kubelet[2823]: I0123 19:07:18.481848 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-bpf-maps\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482395 kubelet[2823]: I0123 19:07:18.481901 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-cilium-cgroup\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482395 kubelet[2823]: I0123 19:07:18.481931 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71a28dcd-f20b-4390-bf33-2cae82c3446b-clustermesh-secrets\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482395 kubelet[2823]: I0123 19:07:18.481959 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71a28dcd-f20b-4390-bf33-2cae82c3446b-cilium-config-path\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482395 kubelet[2823]: I0123 19:07:18.481983 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-cni-path\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482395 kubelet[2823]: I0123 19:07:18.482009 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71a28dcd-f20b-4390-bf33-2cae82c3446b-cilium-ipsec-secrets\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482839 kubelet[2823]: I0123 19:07:18.482033 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-lib-modules\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482839 kubelet[2823]: I0123 19:07:18.482057 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5f4m\" (UniqueName: \"kubernetes.io/projected/71a28dcd-f20b-4390-bf33-2cae82c3446b-kube-api-access-r5f4m\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482839 kubelet[2823]: I0123 19:07:18.482080 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-xtables-lock\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482839 kubelet[2823]: I0123 19:07:18.482101 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-host-proc-sys-net\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482839 kubelet[2823]: I0123 19:07:18.482126 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-hostproc\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.482839 kubelet[2823]: I0123 19:07:18.482163 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71a28dcd-f20b-4390-bf33-2cae82c3446b-hubble-tls\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.483137 kubelet[2823]: I0123 19:07:18.482203 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-etc-cni-netd\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.483137 kubelet[2823]: I0123 19:07:18.482228 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-cilium-run\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.486645 kubelet[2823]: I0123 19:07:18.484396 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71a28dcd-f20b-4390-bf33-2cae82c3446b-host-proc-sys-kernel\") pod \"cilium-bz9q9\" (UID: \"71a28dcd-f20b-4390-bf33-2cae82c3446b\") " pod="kube-system/cilium-bz9q9"
Jan 23 19:07:18.488949 systemd[1]: Created slice kubepods-burstable-pod71a28dcd_f20b_4390_bf33_2cae82c3446b.slice - libcontainer container kubepods-burstable-pod71a28dcd_f20b_4390_bf33_2cae82c3446b.slice.
Jan 23 19:07:18.541883 systemd-logind[1539]: New session 52 of user core.
Jan 23 19:07:18.549105 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 23 19:07:18.770896 sshd[5306]: Connection closed by 10.0.0.1 port 60358
Jan 23 19:07:18.757449 sshd-session[5303]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:18.814018 kubelet[2823]: E0123 19:07:18.811649 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:18.819881 containerd[1549]: time="2026-01-23T19:07:18.819031694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz9q9,Uid:71a28dcd-f20b-4390-bf33-2cae82c3446b,Namespace:kube-system,Attempt:0,}"
Jan 23 19:07:18.855008 systemd[1]: sshd@51-10.0.0.36:22-10.0.0.1:60358.service: Deactivated successfully.
Jan 23 19:07:18.864855 systemd[1]: session-52.scope: Deactivated successfully.
Jan 23 19:07:18.930501 systemd-logind[1539]: Session 52 logged out. Waiting for processes to exit.
Jan 23 19:07:18.959787 systemd[1]: Started sshd@52-10.0.0.36:22-10.0.0.1:60364.service - OpenSSH per-connection server daemon (10.0.0.1:60364).
Jan 23 19:07:19.036524 systemd-logind[1539]: Removed session 52.
Jan 23 19:07:19.164059 containerd[1549]: time="2026-01-23T19:07:19.163992291Z" level=info msg="connecting to shim fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259" address="unix:///run/containerd/s/1691a85da1f7982694fad539afc4f98be0c4bec34321a7f881ef6880224155e2" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:07:19.267880 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 60364 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY
Jan 23 19:07:19.272832 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:19.329375 systemd-logind[1539]: New session 53 of user core.
Jan 23 19:07:19.373164 systemd[1]: Started session-53.scope - Session 53 of User core.
Jan 23 19:07:19.425950 systemd[1]: Started cri-containerd-fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259.scope - libcontainer container fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259.
Jan 23 19:07:19.920094 containerd[1549]: time="2026-01-23T19:07:19.912450788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz9q9,Uid:71a28dcd-f20b-4390-bf33-2cae82c3446b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\""
Jan 23 19:07:19.927687 kubelet[2823]: E0123 19:07:19.915073 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:20.050232 containerd[1549]: time="2026-01-23T19:07:20.042069734Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 19:07:20.282493 containerd[1549]: time="2026-01-23T19:07:20.269627499Z" level=info msg="Container aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:07:20.336313 containerd[1549]: time="2026-01-23T19:07:20.336201700Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375\""
Jan 23 19:07:20.344732 containerd[1549]: time="2026-01-23T19:07:20.340133937Z" level=info msg="StartContainer for \"aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375\""
Jan 23 19:07:20.349162 containerd[1549]: time="2026-01-23T19:07:20.348817264Z" level=info msg="connecting to shim aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375" address="unix:///run/containerd/s/1691a85da1f7982694fad539afc4f98be0c4bec34321a7f881ef6880224155e2" protocol=ttrpc version=3
Jan 23 19:07:20.392387 kubelet[2823]: E0123 19:07:20.390802 2823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-npjcp" podUID="f0a3563d-f3ba-4661-95da-921f0820b95f"
Jan 23 19:07:20.540454 systemd[1]: Started cri-containerd-aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375.scope - libcontainer container aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375.
Jan 23 19:07:21.043216 containerd[1549]: time="2026-01-23T19:07:21.041767875Z" level=info msg="StartContainer for \"aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375\" returns successfully"
Jan 23 19:07:21.208155 systemd[1]: cri-containerd-aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375.scope: Deactivated successfully.
Jan 23 19:07:21.254387 containerd[1549]: time="2026-01-23T19:07:21.254055640Z" level=info msg="received container exit event container_id:\"aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375\" id:\"aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375\" pid:5388 exited_at:{seconds:1769195241 nanos:248196355}"
Jan 23 19:07:21.547686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aee6b533a8abb91050cfd3419cc947a57686805eabe0b719b7e00a7281482375-rootfs.mount: Deactivated successfully.
Jan 23 19:07:21.978104 kubelet[2823]: E0123 19:07:21.971895 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:22.061500 containerd[1549]: time="2026-01-23T19:07:22.057753951Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 19:07:22.223228 containerd[1549]: time="2026-01-23T19:07:22.222825188Z" level=info msg="Container 4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:07:22.290856 containerd[1549]: time="2026-01-23T19:07:22.289980382Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217\""
Jan 23 19:07:22.293210 containerd[1549]: time="2026-01-23T19:07:22.292726900Z" level=info msg="StartContainer for \"4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217\""
Jan 23 19:07:22.298072 containerd[1549]: time="2026-01-23T19:07:22.297466006Z" level=info msg="connecting to shim 4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217" address="unix:///run/containerd/s/1691a85da1f7982694fad539afc4f98be0c4bec34321a7f881ef6880224155e2" protocol=ttrpc version=3
Jan 23 19:07:22.385855 kubelet[2823]: E0123 19:07:22.382854 2823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-npjcp" podUID="f0a3563d-f3ba-4661-95da-921f0820b95f"
Jan 23 19:07:22.445227 systemd[1]: Started cri-containerd-4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217.scope - libcontainer container 4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217.
Jan 23 19:07:22.708004 containerd[1549]: time="2026-01-23T19:07:22.707942133Z" level=info msg="StartContainer for \"4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217\" returns successfully"
Jan 23 19:07:22.769712 systemd[1]: cri-containerd-4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217.scope: Deactivated successfully.
Jan 23 19:07:22.781638 containerd[1549]: time="2026-01-23T19:07:22.771954815Z" level=info msg="received container exit event container_id:\"4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217\" id:\"4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217\" pid:5435 exited_at:{seconds:1769195242 nanos:770484977}"
Jan 23 19:07:22.937953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e3958a03b630e1914d6dec50bcb69ed5986d64899e4dbd3b6721fb318479217-rootfs.mount: Deactivated successfully.
Jan 23 19:07:22.998490 kubelet[2823]: E0123 19:07:22.997867 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:23.038220 containerd[1549]: time="2026-01-23T19:07:23.036806312Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 19:07:23.135111 containerd[1549]: time="2026-01-23T19:07:23.132086860Z" level=info msg="Container fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:07:23.191117 kubelet[2823]: E0123 19:07:23.187085 2823 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 19:07:23.196762 containerd[1549]: time="2026-01-23T19:07:23.196050914Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988\""
Jan 23 19:07:23.208413 containerd[1549]: time="2026-01-23T19:07:23.204724653Z" level=info msg="StartContainer for \"fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988\""
Jan 23 19:07:23.218475 containerd[1549]: time="2026-01-23T19:07:23.214902544Z" level=info msg="connecting to shim fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988" address="unix:///run/containerd/s/1691a85da1f7982694fad539afc4f98be0c4bec34321a7f881ef6880224155e2" protocol=ttrpc version=3
Jan 23 19:07:23.384000 systemd[1]: Started cri-containerd-fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988.scope - libcontainer container fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988.
Jan 23 19:07:23.694765 containerd[1549]: time="2026-01-23T19:07:23.688762265Z" level=info msg="StartContainer for \"fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988\" returns successfully"
Jan 23 19:07:23.706843 systemd[1]: cri-containerd-fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988.scope: Deactivated successfully.
Jan 23 19:07:23.721403 containerd[1549]: time="2026-01-23T19:07:23.720190806Z" level=info msg="received container exit event container_id:\"fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988\" id:\"fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988\" pid:5479 exited_at:{seconds:1769195243 nanos:715933166}"
Jan 23 19:07:23.946999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb913257fa73fcb612cad4e011ed05b101c041df686fecc6b2420a7210e97988-rootfs.mount: Deactivated successfully.
Jan 23 19:07:24.040408 kubelet[2823]: E0123 19:07:24.039965 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:24.084646 containerd[1549]: time="2026-01-23T19:07:24.084429231Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 19:07:24.150140 containerd[1549]: time="2026-01-23T19:07:24.147966796Z" level=info msg="Container 41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:07:24.153109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475567514.mount: Deactivated successfully.
Jan 23 19:07:24.224824 containerd[1549]: time="2026-01-23T19:07:24.220081345Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce\""
Jan 23 19:07:24.230404 containerd[1549]: time="2026-01-23T19:07:24.229951736Z" level=info msg="StartContainer for \"41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce\""
Jan 23 19:07:24.253827 containerd[1549]: time="2026-01-23T19:07:24.251185329Z" level=info msg="connecting to shim 41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce" address="unix:///run/containerd/s/1691a85da1f7982694fad539afc4f98be0c4bec34321a7f881ef6880224155e2" protocol=ttrpc version=3
Jan 23 19:07:24.356926 systemd[1]: Started cri-containerd-41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce.scope - libcontainer container 41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce.
Jan 23 19:07:24.385011 kubelet[2823]: E0123 19:07:24.382021 2823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-npjcp" podUID="f0a3563d-f3ba-4661-95da-921f0820b95f"
Jan 23 19:07:24.552741 systemd[1]: cri-containerd-41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce.scope: Deactivated successfully.
Jan 23 19:07:24.575408 containerd[1549]: time="2026-01-23T19:07:24.575025267Z" level=info msg="received container exit event container_id:\"41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce\" id:\"41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce\" pid:5519 exited_at:{seconds:1769195244 nanos:555494984}"
Jan 23 19:07:24.588681 containerd[1549]: time="2026-01-23T19:07:24.585097813Z" level=info msg="StartContainer for \"41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce\" returns successfully"
Jan 23 19:07:24.930780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41ed0609e73332878ec535ff82eabb74acc6e9bdedbb8ba5ff66150f096240ce-rootfs.mount: Deactivated successfully.
Jan 23 19:07:25.082850 kubelet[2823]: E0123 19:07:25.082002 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:25.127403 containerd[1549]: time="2026-01-23T19:07:25.126979590Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 19:07:25.280640 containerd[1549]: time="2026-01-23T19:07:25.277999560Z" level=info msg="Container 0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:07:25.321202 containerd[1549]: time="2026-01-23T19:07:25.320897933Z" level=info msg="CreateContainer within sandbox \"fd4d873df38b2260ad79034ade34b59b2e1ef9ae77c4187697412f7400c19259\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840\""
Jan 23 19:07:25.328159 containerd[1549]: time="2026-01-23T19:07:25.324454319Z" level=info msg="StartContainer for \"0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840\""
Jan 23 19:07:25.331741 containerd[1549]: time="2026-01-23T19:07:25.331649033Z" level=info msg="connecting to shim 0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840" address="unix:///run/containerd/s/1691a85da1f7982694fad539afc4f98be0c4bec34321a7f881ef6880224155e2" protocol=ttrpc version=3
Jan 23 19:07:25.533485 systemd[1]: Started cri-containerd-0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840.scope - libcontainer container 0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840.
Jan 23 19:07:25.963880 containerd[1549]: time="2026-01-23T19:07:25.961176264Z" level=info msg="StartContainer for \"0e5e301961a33f68e6bac52653ddb9e174050c53ac1cea0868ea2a13443e9840\" returns successfully"
Jan 23 19:07:26.399484 kubelet[2823]: E0123 19:07:26.396902 2823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-npjcp" podUID="f0a3563d-f3ba-4661-95da-921f0820b95f"
Jan 23 19:07:26.461736 kubelet[2823]: I0123 19:07:26.461111 2823 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T19:07:26Z","lastTransitionTime":"2026-01-23T19:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 19:07:27.155092 kubelet[2823]: E0123 19:07:27.152948 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:28.431972 kubelet[2823]: E0123 19:07:28.430491 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:28.628523 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 23 19:07:28.818469 kubelet[2823]: E0123 19:07:28.816052 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:43.197446 systemd-networkd[1468]: lxc_health: Link UP
Jan 23 19:07:43.232209 systemd-networkd[1468]: lxc_health: Gained carrier
Jan 23 19:07:44.563007 systemd-networkd[1468]: lxc_health: Gained IPv6LL
Jan 23 19:07:44.829856 kubelet[2823]: E0123 19:07:44.827439 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:44.994413 kubelet[2823]: I0123 19:07:44.992989 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bz9q9" podStartSLOduration=26.992972796 podStartE2EDuration="26.992972796s" podCreationTimestamp="2026-01-23 19:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:07:27.297881674 +0000 UTC m=+495.696286324" watchObservedRunningTime="2026-01-23 19:07:44.992972796 +0000 UTC m=+513.391377406"
Jan 23 19:07:45.471871 kubelet[2823]: E0123 19:07:45.471043 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:46.525846 kubelet[2823]: E0123 19:07:46.525805 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:07:48.350635 sshd[5360]: Connection closed by 10.0.0.1 port 60364
Jan 23 19:07:48.355061 sshd-session[5317]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:48.381962 systemd[1]: sshd@52-10.0.0.36:22-10.0.0.1:60364.service: Deactivated successfully.
Jan 23 19:07:48.384017 systemd-logind[1539]: Session 53 logged out. Waiting for processes to exit.
Jan 23 19:07:48.394224 systemd[1]: session-53.scope: Deactivated successfully.
Jan 23 19:07:48.394915 systemd[1]: session-53.scope: Consumed 1.124s CPU time, 28M memory peak.
Jan 23 19:07:48.416128 systemd-logind[1539]: Removed session 53.